Dec 13 14:13:31.946444 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Dec 13 14:13:31.946480 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Dec 13 12:58:58 -00 2024
Dec 13 14:13:31.946503 kernel: efi: EFI v2.70 by EDK II
Dec 13 14:13:31.946518 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7171cf98
Dec 13 14:13:31.946532 kernel: ACPI: Early table checksum verification disabled
Dec 13 14:13:31.946545 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Dec 13 14:13:31.946581 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Dec 13 14:13:31.946598 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Dec 13 14:13:31.946612 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Dec 13 14:13:31.946626 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Dec 13 14:13:31.946645 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Dec 13 14:13:31.946659 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Dec 13 14:13:31.946673 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Dec 13 14:13:31.946687 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Dec 13 14:13:31.946703 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Dec 13 14:13:31.946722 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Dec 13 14:13:31.946737 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Dec 13 14:13:31.946751 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Dec 13 14:13:31.946766 kernel: printk: bootconsole [uart0] enabled
Dec 13 14:13:31.946780 kernel: NUMA: Failed to initialise from firmware
Dec 13 14:13:31.946795 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Dec 13 14:13:31.946810 kernel: NUMA: NODE_DATA [mem 0x4b5843900-0x4b5848fff]
Dec 13 14:13:31.946824 kernel: Zone ranges:
Dec 13 14:13:31.946839 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Dec 13 14:13:31.946853 kernel: DMA32 empty
Dec 13 14:13:31.946868 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Dec 13 14:13:31.946886 kernel: Movable zone start for each node
Dec 13 14:13:31.946901 kernel: Early memory node ranges
Dec 13 14:13:31.946915 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Dec 13 14:13:31.946930 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Dec 13 14:13:31.946944 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Dec 13 14:13:31.946958 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Dec 13 14:13:31.946973 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Dec 13 14:13:31.946987 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Dec 13 14:13:31.947001 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Dec 13 14:13:31.947016 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Dec 13 14:13:31.947030 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Dec 13 14:13:31.947045 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Dec 13 14:13:31.947063 kernel: psci: probing for conduit method from ACPI.
Dec 13 14:13:31.947078 kernel: psci: PSCIv1.0 detected in firmware.
Dec 13 14:13:31.947098 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 13 14:13:31.947114 kernel: psci: Trusted OS migration not required
Dec 13 14:13:31.947129 kernel: psci: SMC Calling Convention v1.1
Dec 13 14:13:31.947148 kernel: ACPI: SRAT not present
Dec 13 14:13:31.947164 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
Dec 13 14:13:31.947179 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
Dec 13 14:13:31.947195 kernel: pcpu-alloc: [0] 0 [0] 1
Dec 13 14:13:31.947210 kernel: Detected PIPT I-cache on CPU0
Dec 13 14:13:31.947225 kernel: CPU features: detected: GIC system register CPU interface
Dec 13 14:13:31.947241 kernel: CPU features: detected: Spectre-v2
Dec 13 14:13:31.947256 kernel: CPU features: detected: Spectre-v3a
Dec 13 14:13:31.947271 kernel: CPU features: detected: Spectre-BHB
Dec 13 14:13:31.947286 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 13 14:13:31.947301 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 13 14:13:31.947320 kernel: CPU features: detected: ARM erratum 1742098
Dec 13 14:13:31.947336 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Dec 13 14:13:31.947351 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Dec 13 14:13:31.947367 kernel: Policy zone: Normal
Dec 13 14:13:31.947384 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5997a8cf94b1df1856dc785f0a7074604bbf4c21fdcca24a1996021471a77601
Dec 13 14:13:31.947401 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 14:13:31.947416 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 14:13:31.947431 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 14:13:31.947447 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 14:13:31.947462 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Dec 13 14:13:31.947482 kernel: Memory: 3824524K/4030464K available (9792K kernel code, 2092K rwdata, 7576K rodata, 36416K init, 777K bss, 205940K reserved, 0K cma-reserved)
Dec 13 14:13:31.947498 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 14:13:31.947513 kernel: trace event string verifier disabled
Dec 13 14:13:31.947528 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 14:13:31.947544 kernel: rcu: RCU event tracing is enabled.
Dec 13 14:13:31.947576 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 14:13:31.947595 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 14:13:31.947610 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 14:13:31.947626 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 14:13:31.947641 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 14:13:31.947656 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 13 14:13:31.947671 kernel: GICv3: 96 SPIs implemented
Dec 13 14:13:31.947691 kernel: GICv3: 0 Extended SPIs implemented
Dec 13 14:13:31.947706 kernel: GICv3: Distributor has no Range Selector support
Dec 13 14:13:31.947721 kernel: Root IRQ handler: gic_handle_irq
Dec 13 14:13:31.947736 kernel: GICv3: 16 PPIs implemented
Dec 13 14:13:31.947751 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Dec 13 14:13:31.947766 kernel: ACPI: SRAT not present
Dec 13 14:13:31.947781 kernel: ITS [mem 0x10080000-0x1009ffff]
Dec 13 14:13:31.947796 kernel: ITS@0x0000000010080000: allocated 8192 Devices @400090000 (indirect, esz 8, psz 64K, shr 1)
Dec 13 14:13:31.947812 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000a0000 (flat, esz 8, psz 64K, shr 1)
Dec 13 14:13:31.947827 kernel: GICv3: using LPI property table @0x00000004000b0000
Dec 13 14:13:31.947842 kernel: ITS: Using hypervisor restricted LPI range [128]
Dec 13 14:13:31.947861 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000d0000
Dec 13 14:13:31.947876 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Dec 13 14:13:31.947892 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Dec 13 14:13:31.947907 kernel: sched_clock: 56 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Dec 13 14:13:31.947923 kernel: Console: colour dummy device 80x25
Dec 13 14:13:31.947938 kernel: printk: console [tty1] enabled
Dec 13 14:13:31.947954 kernel: ACPI: Core revision 20210730
Dec 13 14:13:31.947970 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Dec 13 14:13:31.947985 kernel: pid_max: default: 32768 minimum: 301
Dec 13 14:13:31.948001 kernel: LSM: Security Framework initializing
Dec 13 14:13:31.948020 kernel: SELinux: Initializing.
Dec 13 14:13:31.948036 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 14:13:31.948052 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 14:13:31.948067 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 14:13:31.948083 kernel: Platform MSI: ITS@0x10080000 domain created
Dec 13 14:13:31.948098 kernel: PCI/MSI: ITS@0x10080000 domain created
Dec 13 14:13:31.948114 kernel: Remapping and enabling EFI services.
Dec 13 14:13:31.948129 kernel: smp: Bringing up secondary CPUs ...
Dec 13 14:13:31.948145 kernel: Detected PIPT I-cache on CPU1
Dec 13 14:13:31.948160 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Dec 13 14:13:31.948180 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000e0000
Dec 13 14:13:31.948196 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Dec 13 14:13:31.948211 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 14:13:31.948227 kernel: SMP: Total of 2 processors activated.
Dec 13 14:13:31.948242 kernel: CPU features: detected: 32-bit EL0 Support
Dec 13 14:13:31.948257 kernel: CPU features: detected: 32-bit EL1 Support
Dec 13 14:13:31.948273 kernel: CPU features: detected: CRC32 instructions
Dec 13 14:13:31.948288 kernel: CPU: All CPU(s) started at EL1
Dec 13 14:13:31.948303 kernel: alternatives: patching kernel code
Dec 13 14:13:31.948338 kernel: devtmpfs: initialized
Dec 13 14:13:31.948355 kernel: KASLR disabled due to lack of seed
Dec 13 14:13:31.948381 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 14:13:31.948402 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 14:13:31.948418 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 14:13:31.948434 kernel: SMBIOS 3.0.0 present.
Dec 13 14:13:31.948450 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Dec 13 14:13:31.948466 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 14:13:31.948482 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 13 14:13:31.948498 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 13 14:13:31.948515 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 13 14:13:31.948535 kernel: audit: initializing netlink subsys (disabled)
Dec 13 14:13:31.948567 kernel: audit: type=2000 audit(0.247:1): state=initialized audit_enabled=0 res=1
Dec 13 14:13:31.948589 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 14:13:31.948605 kernel: cpuidle: using governor menu
Dec 13 14:13:31.948621 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 13 14:13:31.948642 kernel: ASID allocator initialised with 32768 entries
Dec 13 14:13:31.948659 kernel: ACPI: bus type PCI registered
Dec 13 14:13:31.948675 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 14:13:31.948691 kernel: Serial: AMBA PL011 UART driver
Dec 13 14:13:31.948707 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 14:13:31.948724 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Dec 13 14:13:31.948740 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 14:13:31.948756 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Dec 13 14:13:31.948772 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 14:13:31.948792 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 13 14:13:31.948808 kernel: ACPI: Added _OSI(Module Device)
Dec 13 14:13:31.948824 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 14:13:31.948840 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 14:13:31.948856 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 14:13:31.948872 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 14:13:31.948888 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 14:13:31.948904 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 14:13:31.948920 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 14:13:31.948940 kernel: ACPI: Interpreter enabled
Dec 13 14:13:31.948956 kernel: ACPI: Using GIC for interrupt routing
Dec 13 14:13:31.948972 kernel: ACPI: MCFG table detected, 1 entries
Dec 13 14:13:31.949004 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Dec 13 14:13:31.949274 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 14:13:31.949470 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 13 14:13:31.949679 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 13 14:13:31.949866 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Dec 13 14:13:31.950056 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Dec 13 14:13:31.950078 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Dec 13 14:13:31.950095 kernel: acpiphp: Slot [1] registered
Dec 13 14:13:31.950112 kernel: acpiphp: Slot [2] registered
Dec 13 14:13:31.950128 kernel: acpiphp: Slot [3] registered
Dec 13 14:13:31.950144 kernel: acpiphp: Slot [4] registered
Dec 13 14:13:31.950160 kernel: acpiphp: Slot [5] registered
Dec 13 14:13:31.950176 kernel: acpiphp: Slot [6] registered
Dec 13 14:13:31.950192 kernel: acpiphp: Slot [7] registered
Dec 13 14:13:31.950212 kernel: acpiphp: Slot [8] registered
Dec 13 14:13:31.950228 kernel: acpiphp: Slot [9] registered
Dec 13 14:13:31.950244 kernel: acpiphp: Slot [10] registered
Dec 13 14:13:31.950260 kernel: acpiphp: Slot [11] registered
Dec 13 14:13:31.950276 kernel: acpiphp: Slot [12] registered
Dec 13 14:13:31.955663 kernel: acpiphp: Slot [13] registered
Dec 13 14:13:31.955682 kernel: acpiphp: Slot [14] registered
Dec 13 14:13:31.955699 kernel: acpiphp: Slot [15] registered
Dec 13 14:13:31.955716 kernel: acpiphp: Slot [16] registered
Dec 13 14:13:31.955742 kernel: acpiphp: Slot [17] registered
Dec 13 14:13:31.955759 kernel: acpiphp: Slot [18] registered
Dec 13 14:13:31.955775 kernel: acpiphp: Slot [19] registered
Dec 13 14:13:31.955791 kernel: acpiphp: Slot [20] registered
Dec 13 14:13:31.955807 kernel: acpiphp: Slot [21] registered
Dec 13 14:13:31.955823 kernel: acpiphp: Slot [22] registered
Dec 13 14:13:31.955839 kernel: acpiphp: Slot [23] registered
Dec 13 14:13:31.955855 kernel: acpiphp: Slot [24] registered
Dec 13 14:13:31.955872 kernel: acpiphp: Slot [25] registered
Dec 13 14:13:31.955888 kernel: acpiphp: Slot [26] registered
Dec 13 14:13:31.955908 kernel: acpiphp: Slot [27] registered
Dec 13 14:13:31.955925 kernel: acpiphp: Slot [28] registered
Dec 13 14:13:31.955942 kernel: acpiphp: Slot [29] registered
Dec 13 14:13:31.955958 kernel: acpiphp: Slot [30] registered
Dec 13 14:13:31.955974 kernel: acpiphp: Slot [31] registered
Dec 13 14:13:31.955990 kernel: PCI host bridge to bus 0000:00
Dec 13 14:13:31.956227 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Dec 13 14:13:31.957052 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 13 14:13:31.957268 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Dec 13 14:13:31.957468 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Dec 13 14:13:31.957744 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Dec 13 14:13:31.957995 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Dec 13 14:13:31.958216 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Dec 13 14:13:31.958445 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Dec 13 14:13:31.958691 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Dec 13 14:13:31.958911 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Dec 13 14:13:31.959139 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Dec 13 14:13:31.959362 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Dec 13 14:13:31.959602 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Dec 13 14:13:31.959827 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Dec 13 14:13:31.960045 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Dec 13 14:13:31.960263 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Dec 13 14:13:31.960484 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Dec 13 14:13:31.960752 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Dec 13 14:13:31.960978 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Dec 13 14:13:31.961223 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Dec 13 14:13:31.961431 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Dec 13 14:13:31.961655 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 13 14:13:31.961861 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Dec 13 14:13:31.961884 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 13 14:13:31.961901 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 13 14:13:31.961918 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 13 14:13:31.961934 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 13 14:13:31.961951 kernel: iommu: Default domain type: Translated
Dec 13 14:13:31.961967 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 13 14:13:31.961983 kernel: vgaarb: loaded
Dec 13 14:13:31.961999 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 14:13:31.962021 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 13 14:13:31.962037 kernel: PTP clock support registered
Dec 13 14:13:31.962053 kernel: Registered efivars operations
Dec 13 14:13:31.962070 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 13 14:13:31.962086 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 14:13:31.962102 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 14:13:31.962119 kernel: pnp: PnP ACPI init
Dec 13 14:13:31.962342 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Dec 13 14:13:31.962370 kernel: pnp: PnP ACPI: found 1 devices
Dec 13 14:13:31.962387 kernel: NET: Registered PF_INET protocol family
Dec 13 14:13:31.962404 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 14:13:31.962420 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 14:13:31.962437 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 14:13:31.962453 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 14:13:31.962470 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Dec 13 14:13:31.962486 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 14:13:31.962503 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 14:13:31.962523 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 14:13:31.962540 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 14:13:31.962573 kernel: PCI: CLS 0 bytes, default 64
Dec 13 14:13:31.962592 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Dec 13 14:13:31.962608 kernel: kvm [1]: HYP mode not available
Dec 13 14:13:31.962625 kernel: Initialise system trusted keyrings
Dec 13 14:13:31.962642 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 14:13:31.962658 kernel: Key type asymmetric registered
Dec 13 14:13:31.962674 kernel: Asymmetric key parser 'x509' registered
Dec 13 14:13:31.962695 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 14:13:31.962712 kernel: io scheduler mq-deadline registered
Dec 13 14:13:31.962728 kernel: io scheduler kyber registered
Dec 13 14:13:31.962744 kernel: io scheduler bfq registered
Dec 13 14:13:31.977843 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Dec 13 14:13:31.977892 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Dec 13 14:13:31.977910 kernel: ACPI: button: Power Button [PWRB]
Dec 13 14:13:31.977927 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Dec 13 14:13:31.977953 kernel: ACPI: button: Sleep Button [SLPB]
Dec 13 14:13:31.977970 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 14:13:31.977987 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Dec 13 14:13:31.978225 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Dec 13 14:13:31.978250 kernel: printk: console [ttyS0] disabled
Dec 13 14:13:31.978267 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Dec 13 14:13:31.978284 kernel: printk: console [ttyS0] enabled
Dec 13 14:13:31.978301 kernel: printk: bootconsole [uart0] disabled
Dec 13 14:13:31.978317 kernel: thunder_xcv, ver 1.0
Dec 13 14:13:31.978333 kernel: thunder_bgx, ver 1.0
Dec 13 14:13:31.978354 kernel: nicpf, ver 1.0
Dec 13 14:13:31.978370 kernel: nicvf, ver 1.0
Dec 13 14:13:31.978610 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 13 14:13:31.978827 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T14:13:31 UTC (1734099211)
Dec 13 14:13:31.978852 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 14:13:31.978869 kernel: NET: Registered PF_INET6 protocol family
Dec 13 14:13:31.978885 kernel: Segment Routing with IPv6
Dec 13 14:13:31.978902 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 14:13:31.978925 kernel: NET: Registered PF_PACKET protocol family
Dec 13 14:13:31.978941 kernel: Key type dns_resolver registered
Dec 13 14:13:31.978957 kernel: registered taskstats version 1
Dec 13 14:13:31.978974 kernel: Loading compiled-in X.509 certificates
Dec 13 14:13:31.978992 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: e011ba9949ade5a6d03f7a5e28171f7f59e70f8a'
Dec 13 14:13:31.979009 kernel: Key type .fscrypt registered
Dec 13 14:13:31.979025 kernel: Key type fscrypt-provisioning registered
Dec 13 14:13:31.979042 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 14:13:31.979058 kernel: ima: Allocated hash algorithm: sha1
Dec 13 14:13:31.979078 kernel: ima: No architecture policies found
Dec 13 14:13:31.979094 kernel: clk: Disabling unused clocks
Dec 13 14:13:31.979111 kernel: Freeing unused kernel memory: 36416K
Dec 13 14:13:31.979127 kernel: Run /init as init process
Dec 13 14:13:31.979143 kernel: with arguments:
Dec 13 14:13:31.979159 kernel: /init
Dec 13 14:13:31.979175 kernel: with environment:
Dec 13 14:13:31.979190 kernel: HOME=/
Dec 13 14:13:31.979206 kernel: TERM=linux
Dec 13 14:13:31.979226 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 14:13:31.979248 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 14:13:31.979269 systemd[1]: Detected virtualization amazon.
Dec 13 14:13:31.979288 systemd[1]: Detected architecture arm64.
Dec 13 14:13:31.979305 systemd[1]: Running in initrd.
Dec 13 14:13:31.979322 systemd[1]: No hostname configured, using default hostname.
Dec 13 14:13:31.979339 systemd[1]: Hostname set to <localhost>.
Dec 13 14:13:31.979362 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 14:13:31.979380 systemd[1]: Queued start job for default target initrd.target.
Dec 13 14:13:31.979397 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 14:13:31.979415 systemd[1]: Reached target cryptsetup.target.
Dec 13 14:13:31.979432 systemd[1]: Reached target paths.target.
Dec 13 14:13:31.979450 systemd[1]: Reached target slices.target.
Dec 13 14:13:31.979495 systemd[1]: Reached target swap.target.
Dec 13 14:13:31.979517 systemd[1]: Reached target timers.target.
Dec 13 14:13:31.979541 systemd[1]: Listening on iscsid.socket.
Dec 13 14:13:31.979579 systemd[1]: Listening on iscsiuio.socket.
Dec 13 14:13:31.979599 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 14:13:31.979617 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 14:13:31.979635 systemd[1]: Listening on systemd-journald.socket.
Dec 13 14:13:31.979653 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 14:13:31.979671 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 14:13:31.979688 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 14:13:31.979711 systemd[1]: Reached target sockets.target.
Dec 13 14:13:31.979730 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 14:13:31.979747 systemd[1]: Finished network-cleanup.service.
Dec 13 14:13:31.979765 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 14:13:31.979782 systemd[1]: Starting systemd-journald.service...
Dec 13 14:13:31.979800 systemd[1]: Starting systemd-modules-load.service...
Dec 13 14:13:31.979817 systemd[1]: Starting systemd-resolved.service...
Dec 13 14:13:31.979835 systemd[1]: Starting systemd-vconsole-setup.service...
Dec 13 14:13:31.979853 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 14:13:31.979874 kernel: audit: type=1130 audit(1734099211.942:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:31.979893 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 14:13:31.979910 kernel: audit: type=1130 audit(1734099211.954:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:31.979928 systemd[1]: Finished systemd-vconsole-setup.service.
Dec 13 14:13:31.979946 systemd[1]: Starting dracut-cmdline-ask.service...
Dec 13 14:13:31.979963 kernel: audit: type=1130 audit(1734099211.966:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:31.979984 systemd-journald[309]: Journal started
Dec 13 14:13:31.980074 systemd-journald[309]: Runtime Journal (/run/log/journal/ec2fba501b74534b2b0c9fa8bef60c71) is 8.0M, max 75.4M, 67.4M free.
Dec 13 14:13:31.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:31.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:31.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:31.941309 systemd-modules-load[310]: Inserted module 'overlay'
Dec 13 14:13:32.005679 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 14:13:32.005718 systemd[1]: Started systemd-journald.service.
Dec 13 14:13:32.005742 kernel: audit: type=1130 audit(1734099211.995:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:31.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:32.025850 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 14:13:32.040451 kernel: audit: type=1130 audit(1734099212.024:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:32.040497 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 14:13:32.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:32.053041 systemd-modules-load[310]: Inserted module 'br_netfilter'
Dec 13 14:13:32.054729 kernel: Bridge firewalling registered
Dec 13 14:13:32.057531 systemd-resolved[311]: Positive Trust Anchors:
Dec 13 14:13:32.060736 systemd-resolved[311]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 14:13:32.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:32.063844 systemd[1]: Finished dracut-cmdline-ask.service.
Dec 13 14:13:32.066849 systemd[1]: Starting dracut-cmdline.service...
Dec 13 14:13:32.094074 kernel: audit: type=1130 audit(1734099212.063:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:32.094357 systemd-resolved[311]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 14:13:32.113160 kernel: SCSI subsystem initialized
Dec 13 14:13:32.121563 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 14:13:32.121629 kernel: device-mapper: uevent: version 1.0.3
Dec 13 14:13:32.128030 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Dec 13 14:13:32.133184 dracut-cmdline[326]: dracut-dracut-053
Dec 13 14:13:32.136629 systemd-modules-load[310]: Inserted module 'dm_multipath'
Dec 13 14:13:32.139810 systemd[1]: Finished systemd-modules-load.service.
Dec 13 14:13:32.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:32.150749 dracut-cmdline[326]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5997a8cf94b1df1856dc785f0a7074604bbf4c21fdcca24a1996021471a77601
Dec 13 14:13:32.161205 kernel: audit: type=1130 audit(1734099212.139:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:32.150337 systemd[1]: Starting systemd-sysctl.service...
Dec 13 14:13:32.183004 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:13:32.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:32.192591 kernel: audit: type=1130 audit(1734099212.184:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:32.275581 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 14:13:32.294591 kernel: iscsi: registered transport (tcp)
Dec 13 14:13:32.321105 kernel: iscsi: registered transport (qla4xxx)
Dec 13 14:13:32.321175 kernel: QLogic iSCSI HBA Driver
Dec 13 14:13:32.529383 systemd-resolved[311]: Defaulting to hostname 'linux'.
Dec 13 14:13:32.531715 kernel: random: crng init done
Dec 13 14:13:32.533499 systemd[1]: Started systemd-resolved.service.
Dec 13 14:13:32.544782 kernel: audit: type=1130 audit(1734099212.533:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:32.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:32.535315 systemd[1]: Reached target nss-lookup.target.
Dec 13 14:13:32.557069 systemd[1]: Finished dracut-cmdline.service.
Dec 13 14:13:32.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:32.561439 systemd[1]: Starting dracut-pre-udev.service...
Dec 13 14:13:32.627604 kernel: raid6: neonx8 gen() 6210 MB/s
Dec 13 14:13:32.643585 kernel: raid6: neonx8 xor() 4634 MB/s
Dec 13 14:13:32.661583 kernel: raid6: neonx4 gen() 6578 MB/s
Dec 13 14:13:32.679585 kernel: raid6: neonx4 xor() 4823 MB/s
Dec 13 14:13:32.697583 kernel: raid6: neonx2 gen() 5787 MB/s
Dec 13 14:13:32.715584 kernel: raid6: neonx2 xor() 4435 MB/s
Dec 13 14:13:32.733583 kernel: raid6: neonx1 gen() 4488 MB/s
Dec 13 14:13:32.751583 kernel: raid6: neonx1 xor() 3613 MB/s
Dec 13 14:13:32.769583 kernel: raid6: int64x8 gen() 3445 MB/s
Dec 13 14:13:32.787584 kernel: raid6: int64x8 xor() 2066 MB/s
Dec 13 14:13:32.805583 kernel: raid6: int64x4 gen() 3848 MB/s
Dec 13 14:13:32.823584 kernel: raid6: int64x4 xor() 2173 MB/s
Dec 13 14:13:32.841583 kernel: raid6: int64x2 gen() 3615 MB/s
Dec 13 14:13:32.859583 kernel: raid6: int64x2 xor() 1929 MB/s
Dec 13 14:13:32.877584 kernel: raid6: int64x1 gen() 2761 MB/s
Dec 13 14:13:32.896683 kernel: raid6: int64x1 xor() 1403 MB/s
Dec 13 14:13:32.896713 kernel: raid6: using algorithm neonx4 gen() 6578 MB/s
Dec 13 14:13:32.896737 kernel: raid6: .... xor() 4823 MB/s, rmw enabled
Dec 13 14:13:32.898299 kernel: raid6: using neon recovery algorithm
Dec 13 14:13:32.917714 kernel: xor: measuring software checksum speed
Dec 13 14:13:32.917775 kernel: 8regs : 9104 MB/sec
Dec 13 14:13:32.919413 kernel: 32regs : 11102 MB/sec
Dec 13 14:13:32.922842 kernel: arm64_neon : 9086 MB/sec
Dec 13 14:13:32.922874 kernel: xor: using function: 32regs (11102 MB/sec)
Dec 13 14:13:33.012598 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Dec 13 14:13:33.029357 systemd[1]: Finished dracut-pre-udev.service.
Dec 13 14:13:33.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:33.031000 audit: BPF prog-id=7 op=LOAD
Dec 13 14:13:33.031000 audit: BPF prog-id=8 op=LOAD
Dec 13 14:13:33.033796 systemd[1]: Starting systemd-udevd.service...
Dec 13 14:13:33.060213 systemd-udevd[509]: Using default interface naming scheme 'v252'.
Dec 13 14:13:33.070283 systemd[1]: Started systemd-udevd.service.
Dec 13 14:13:33.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:33.078378 systemd[1]: Starting dracut-pre-trigger.service...
Dec 13 14:13:33.104790 dracut-pre-trigger[520]: rd.md=0: removing MD RAID activation
Dec 13 14:13:33.165824 systemd[1]: Finished dracut-pre-trigger.service.
Dec 13 14:13:33.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:33.168848 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 14:13:33.270168 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 14:13:33.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:33.401570 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Dec 13 14:13:33.401650 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Dec 13 14:13:33.424194 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Dec 13 14:13:33.424225 kernel: nvme nvme0: pci function 0000:00:04.0
Dec 13 14:13:33.424474 kernel: ena 0000:00:05.0: ENA device version: 0.10
Dec 13 14:13:33.424712 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Dec 13 14:13:33.424910 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Dec 13 14:13:33.425137 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:61:ef:07:31:53
Dec 13 14:13:33.425336 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 14:13:33.425360 kernel: GPT:9289727 != 16777215
Dec 13 14:13:33.427673 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 14:13:33.427714 kernel: GPT:9289727 != 16777215
Dec 13 14:13:33.428793 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 14:13:33.431856 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 14:13:33.436654 (udev-worker)[576]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 14:13:33.511600 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (566)
Dec 13 14:13:33.557789 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Dec 13 14:13:33.613152 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Dec 13 14:13:33.627584 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 14:13:33.639493 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Dec 13 14:13:33.643960 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Dec 13 14:13:33.657319 systemd[1]: Starting disk-uuid.service...
Dec 13 14:13:33.667144 disk-uuid[674]: Primary Header is updated.
Dec 13 14:13:33.667144 disk-uuid[674]: Secondary Entries is updated.
Dec 13 14:13:33.667144 disk-uuid[674]: Secondary Header is updated.
Dec 13 14:13:33.676588 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 14:13:33.685600 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 14:13:33.693599 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 14:13:34.692617 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 14:13:34.693121 disk-uuid[675]: The operation has completed successfully.
Dec 13 14:13:34.861425 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 14:13:34.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:34.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:34.861654 systemd[1]: Finished disk-uuid.service.
Dec 13 14:13:34.883704 systemd[1]: Starting verity-setup.service...
Dec 13 14:13:34.913608 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Dec 13 14:13:34.998056 systemd[1]: Found device dev-mapper-usr.device.
Dec 13 14:13:35.002540 systemd[1]: Mounting sysusr-usr.mount...
Dec 13 14:13:35.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:35.009619 systemd[1]: Finished verity-setup.service.
Dec 13 14:13:35.097601 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Dec 13 14:13:35.097972 systemd[1]: Mounted sysusr-usr.mount.
Dec 13 14:13:35.099083 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Dec 13 14:13:35.101888 systemd[1]: Starting ignition-setup.service...
Dec 13 14:13:35.109277 systemd[1]: Starting parse-ip-for-networkd.service...
Dec 13 14:13:35.141810 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 14:13:35.141869 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 14:13:35.143822 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Dec 13 14:13:35.153585 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 14:13:35.174148 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 14:13:35.188451 systemd[1]: Finished ignition-setup.service.
Dec 13 14:13:35.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:35.191915 systemd[1]: Starting ignition-fetch-offline.service...
Dec 13 14:13:35.267011 systemd[1]: Finished parse-ip-for-networkd.service.
Dec 13 14:13:35.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:35.269000 audit: BPF prog-id=9 op=LOAD
Dec 13 14:13:35.272296 systemd[1]: Starting systemd-networkd.service...
Dec 13 14:13:35.320116 systemd-networkd[1187]: lo: Link UP
Dec 13 14:13:35.320139 systemd-networkd[1187]: lo: Gained carrier
Dec 13 14:13:35.323946 systemd-networkd[1187]: Enumeration completed
Dec 13 14:13:35.325384 systemd[1]: Started systemd-networkd.service.
Dec 13 14:13:35.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:35.326209 systemd[1]: Reached target network.target.
Dec 13 14:13:35.328662 systemd[1]: Starting iscsiuio.service...
Dec 13 14:13:35.339483 systemd-networkd[1187]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 14:13:35.344315 systemd-networkd[1187]: eth0: Link UP
Dec 13 14:13:35.344324 systemd-networkd[1187]: eth0: Gained carrier
Dec 13 14:13:35.347291 systemd[1]: Started iscsiuio.service.
Dec 13 14:13:35.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:35.354629 systemd[1]: Starting iscsid.service...
Dec 13 14:13:35.355899 systemd-networkd[1187]: eth0: DHCPv4 address 172.31.24.251/20, gateway 172.31.16.1 acquired from 172.31.16.1
Dec 13 14:13:35.366026 iscsid[1192]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 14:13:35.366026 iscsid[1192]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Dec 13 14:13:35.366026 iscsid[1192]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Dec 13 14:13:35.366026 iscsid[1192]: If using hardware iscsi like qla4xxx this message can be ignored.
Dec 13 14:13:35.366026 iscsid[1192]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 14:13:35.384529 iscsid[1192]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Dec 13 14:13:35.390003 systemd[1]: Started iscsid.service.
Dec 13 14:13:35.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:35.393417 systemd[1]: Starting dracut-initqueue.service...
Dec 13 14:13:35.415532 systemd[1]: Finished dracut-initqueue.service.
Dec 13 14:13:35.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:35.418593 systemd[1]: Reached target remote-fs-pre.target.
Dec 13 14:13:35.421529 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 14:13:35.424651 systemd[1]: Reached target remote-fs.target.
Dec 13 14:13:35.429892 systemd[1]: Starting dracut-pre-mount.service...
Dec 13 14:13:35.446997 systemd[1]: Finished dracut-pre-mount.service.
Dec 13 14:13:35.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:35.940879 ignition[1121]: Ignition 2.14.0
Dec 13 14:13:35.940907 ignition[1121]: Stage: fetch-offline
Dec 13 14:13:35.941293 ignition[1121]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:13:35.941361 ignition[1121]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:13:35.961074 ignition[1121]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:13:35.962889 ignition[1121]: Ignition finished successfully
Dec 13 14:13:35.965804 systemd[1]: Finished ignition-fetch-offline.service.
Dec 13 14:13:35.977889 kernel: kauditd_printk_skb: 18 callbacks suppressed
Dec 13 14:13:35.978047 kernel: audit: type=1130 audit(1734099215.965:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:35.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:35.972100 systemd[1]: Starting ignition-fetch.service...
Dec 13 14:13:35.987906 ignition[1212]: Ignition 2.14.0
Dec 13 14:13:35.987933 ignition[1212]: Stage: fetch
Dec 13 14:13:35.988233 ignition[1212]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:13:35.988291 ignition[1212]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:13:36.002998 ignition[1212]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:13:36.007441 ignition[1212]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:13:36.040503 ignition[1212]: INFO : PUT result: OK
Dec 13 14:13:36.053165 ignition[1212]: DEBUG : parsed url from cmdline: ""
Dec 13 14:13:36.054974 ignition[1212]: INFO : no config URL provided
Dec 13 14:13:36.054974 ignition[1212]: INFO : reading system config file "/usr/lib/ignition/user.ign"
Dec 13 14:13:36.054974 ignition[1212]: INFO : no config at "/usr/lib/ignition/user.ign"
Dec 13 14:13:36.054974 ignition[1212]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:13:36.062784 ignition[1212]: INFO : PUT result: OK
Dec 13 14:13:36.062784 ignition[1212]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Dec 13 14:13:36.066840 ignition[1212]: INFO : GET result: OK
Dec 13 14:13:36.068323 ignition[1212]: DEBUG : parsing config with SHA512: eb499bdc2d115e0b183223bf1e0b5e4655a3b2110ac6cd35822925ef7639e20315588d2d3ef91d922350dec38e892cd14ae5b652cf79e9fe254e96abdaf9cb7c
Dec 13 14:13:36.078170 unknown[1212]: fetched base config from "system"
Dec 13 14:13:36.078748 unknown[1212]: fetched base config from "system"
Dec 13 14:13:36.078776 unknown[1212]: fetched user config from "aws"
Dec 13 14:13:36.083234 ignition[1212]: fetch: fetch complete
Dec 13 14:13:36.083366 ignition[1212]: fetch: fetch passed
Dec 13 14:13:36.083454 ignition[1212]: Ignition finished successfully
Dec 13 14:13:36.089533 systemd[1]: Finished ignition-fetch.service.
Dec 13 14:13:36.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:36.093784 systemd[1]: Starting ignition-kargs.service...
Dec 13 14:13:36.102608 kernel: audit: type=1130 audit(1734099216.090:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:36.115638 ignition[1218]: Ignition 2.14.0
Dec 13 14:13:36.115664 ignition[1218]: Stage: kargs
Dec 13 14:13:36.115965 ignition[1218]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:13:36.116018 ignition[1218]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:13:36.130132 ignition[1218]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:13:36.132955 ignition[1218]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:13:36.135897 ignition[1218]: INFO : PUT result: OK
Dec 13 14:13:36.140457 ignition[1218]: kargs: kargs passed
Dec 13 14:13:36.140745 ignition[1218]: Ignition finished successfully
Dec 13 14:13:36.144834 systemd[1]: Finished ignition-kargs.service.
Dec 13 14:13:36.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:36.156034 kernel: audit: type=1130 audit(1734099216.144:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:36.154016 systemd[1]: Starting ignition-disks.service...
Dec 13 14:13:36.168827 ignition[1224]: Ignition 2.14.0
Dec 13 14:13:36.170435 ignition[1224]: Stage: disks
Dec 13 14:13:36.171881 ignition[1224]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:13:36.174073 ignition[1224]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:13:36.184824 ignition[1224]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:13:36.187275 ignition[1224]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:13:36.190575 ignition[1224]: INFO : PUT result: OK
Dec 13 14:13:36.196397 ignition[1224]: disks: disks passed
Dec 13 14:13:36.196500 ignition[1224]: Ignition finished successfully
Dec 13 14:13:36.200794 systemd[1]: Finished ignition-disks.service.
Dec 13 14:13:36.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:36.203845 systemd[1]: Reached target initrd-root-device.target.
Dec 13 14:13:36.230740 kernel: audit: type=1130 audit(1734099216.202:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:36.211663 systemd[1]: Reached target local-fs-pre.target.
Dec 13 14:13:36.213279 systemd[1]: Reached target local-fs.target.
Dec 13 14:13:36.214767 systemd[1]: Reached target sysinit.target.
Dec 13 14:13:36.216211 systemd[1]: Reached target basic.target.
Dec 13 14:13:36.219001 systemd[1]: Starting systemd-fsck-root.service...
Dec 13 14:13:36.266662 systemd-fsck[1232]: ROOT: clean, 621/553520 files, 56020/553472 blocks
Dec 13 14:13:36.274491 systemd[1]: Finished systemd-fsck-root.service.
Dec 13 14:13:36.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:36.278991 systemd[1]: Mounting sysroot.mount...
Dec 13 14:13:36.289883 kernel: audit: type=1130 audit(1734099216.275:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:36.303769 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Dec 13 14:13:36.304827 systemd[1]: Mounted sysroot.mount.
Dec 13 14:13:36.307442 systemd[1]: Reached target initrd-root-fs.target.
Dec 13 14:13:36.327108 systemd[1]: Mounting sysroot-usr.mount...
Dec 13 14:13:36.329216 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Dec 13 14:13:36.329292 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 14:13:36.329344 systemd[1]: Reached target ignition-diskful.target.
Dec 13 14:13:36.344354 systemd[1]: Mounted sysroot-usr.mount.
Dec 13 14:13:36.366454 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Dec 13 14:13:36.373757 systemd[1]: Starting initrd-setup-root.service...
Dec 13 14:13:36.389901 initrd-setup-root[1254]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 14:13:36.398613 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1249)
Dec 13 14:13:36.404628 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 14:13:36.404677 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 14:13:36.406571 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Dec 13 14:13:36.409417 initrd-setup-root[1269]: cut: /sysroot/etc/group: No such file or directory
Dec 13 14:13:36.418219 initrd-setup-root[1286]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 14:13:36.426411 initrd-setup-root[1294]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 14:13:36.449599 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 14:13:36.460610 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Dec 13 14:13:36.609530 systemd[1]: Finished initrd-setup-root.service.
Dec 13 14:13:36.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:36.613868 systemd[1]: Starting ignition-mount.service...
Dec 13 14:13:36.622313 kernel: audit: type=1130 audit(1734099216.611:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:36.623522 systemd[1]: Starting sysroot-boot.service...
Dec 13 14:13:36.632447 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Dec 13 14:13:36.632640 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Dec 13 14:13:36.657242 ignition[1314]: INFO : Ignition 2.14.0
Dec 13 14:13:36.659045 ignition[1314]: INFO : Stage: mount
Dec 13 14:13:36.660515 ignition[1314]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:13:36.662842 ignition[1314]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:13:36.675044 ignition[1314]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:13:36.677430 ignition[1314]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:13:36.680898 ignition[1314]: INFO : PUT result: OK
Dec 13 14:13:36.686528 ignition[1314]: INFO : mount: mount passed
Dec 13 14:13:36.688156 ignition[1314]: INFO : Ignition finished successfully
Dec 13 14:13:36.691138 systemd[1]: Finished ignition-mount.service.
Dec 13 14:13:36.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:36.702614 systemd[1]: Starting ignition-files.service...
Dec 13 14:13:36.713825 kernel: audit: type=1130 audit(1734099216.692:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:36.718708 systemd[1]: Finished sysroot-boot.service. Dec 13 14:13:36.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:36.728585 kernel: audit: type=1130 audit(1734099216.719:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:36.730479 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 14:13:36.753597 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1324) Dec 13 14:13:36.758534 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Dec 13 14:13:36.758590 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 14:13:36.758616 kernel: BTRFS info (device nvme0n1p6): has skinny extents Dec 13 14:13:36.772587 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 14:13:36.777718 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 14:13:36.796588 ignition[1343]: INFO : Ignition 2.14.0 Dec 13 14:13:36.796588 ignition[1343]: INFO : Stage: files Dec 13 14:13:36.800700 ignition[1343]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:13:36.800700 ignition[1343]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 14:13:36.813161 ignition[1343]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 14:13:36.815740 ignition[1343]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 14:13:36.818726 ignition[1343]: INFO : PUT result: OK Dec 13 14:13:36.823625 ignition[1343]: DEBUG : files: compiled without relabeling support, skipping Dec 13 14:13:36.827235 ignition[1343]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 14:13:36.827235 ignition[1343]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 14:13:36.857731 ignition[1343]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 14:13:36.860599 ignition[1343]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 14:13:36.864684 unknown[1343]: wrote ssh authorized keys file for user: core Dec 13 14:13:36.866765 ignition[1343]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 14:13:36.875946 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 14:13:36.879902 ignition[1343]: INFO : GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Dec 13 14:13:36.968852 systemd-networkd[1187]: eth0: Gained IPv6LL Dec 13 14:13:36.974347 ignition[1343]: INFO : GET result: OK Dec 13 14:13:37.139862 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 14:13:37.143449 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file 
"/sysroot/home/core/nfs-pod.yaml" Dec 13 14:13:37.143449 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 14:13:37.143449 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/eks/bootstrap.sh" Dec 13 14:13:37.143449 ignition[1343]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:13:37.166417 ignition[1343]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem986725903" Dec 13 14:13:37.166417 ignition[1343]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem986725903": device or resource busy Dec 13 14:13:37.166417 ignition[1343]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem986725903", trying btrfs: device or resource busy Dec 13 14:13:37.166417 ignition[1343]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem986725903" Dec 13 14:13:37.178044 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1348) Dec 13 14:13:37.178089 ignition[1343]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem986725903" Dec 13 14:13:37.185053 ignition[1343]: INFO : op(3): [started] unmounting "/mnt/oem986725903" Dec 13 14:13:37.185053 ignition[1343]: INFO : op(3): [finished] unmounting "/mnt/oem986725903" Dec 13 14:13:37.185053 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/eks/bootstrap.sh" Dec 13 14:13:37.193226 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 14:13:37.193226 ignition[1343]: INFO : GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Dec 13 14:13:37.684896 ignition[1343]: INFO : GET result: OK Dec 13 14:13:37.861792 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 14:13:37.865093 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh" Dec 13 14:13:37.868635 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 14:13:37.868635 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 14:13:37.868635 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 14:13:37.868635 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 14:13:37.881060 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 14:13:37.884427 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:13:37.887788 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:13:37.890929 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 14:13:37.895457 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 14:13:37.905116 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Dec 13 14:13:37.908633 ignition[1343]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:13:37.918223 ignition[1343]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3372987862" Dec 13 14:13:37.920931 ignition[1343]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3372987862": device or resource busy Dec 13 14:13:37.920931 ignition[1343]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3372987862", trying btrfs: device or resource busy Dec 13 14:13:37.920931 ignition[1343]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3372987862" Dec 13 14:13:37.920931 ignition[1343]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3372987862" Dec 13 14:13:37.936695 ignition[1343]: INFO : op(6): [started] unmounting "/mnt/oem3372987862" Dec 13 14:13:37.936695 ignition[1343]: INFO : op(6): [finished] unmounting "/mnt/oem3372987862" Dec 13 14:13:37.936695 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Dec 13 14:13:37.936695 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Dec 13 14:13:37.936695 ignition[1343]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:13:37.940887 systemd[1]: mnt-oem3372987862.mount: Deactivated successfully. Dec 13 14:13:37.966485 ignition[1343]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1151789672" Dec 13 14:13:37.969360 ignition[1343]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1151789672": device or resource busy Dec 13 14:13:37.969360 ignition[1343]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1151789672", trying btrfs: device or resource busy Dec 13 14:13:37.969360 ignition[1343]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1151789672" Dec 13 14:13:37.969360 ignition[1343]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1151789672" Dec 13 14:13:37.981092 ignition[1343]: INFO : op(9): [started] unmounting "/mnt/oem1151789672" Dec 13 14:13:37.981092 ignition[1343]: INFO : op(9): [finished] unmounting "/mnt/oem1151789672" Dec 13 14:13:37.989184 systemd[1]: mnt-oem1151789672.mount: Deactivated successfully. 
Dec 13 14:13:37.994448 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Dec 13 14:13:37.997767 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 14:13:37.997767 ignition[1343]: INFO : GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Dec 13 14:13:38.474485 ignition[1343]: INFO : GET result: OK Dec 13 14:13:38.880267 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 14:13:38.888583 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Dec 13 14:13:38.888583 ignition[1343]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:13:38.900214 ignition[1343]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2163229670" Dec 13 14:13:38.902854 ignition[1343]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2163229670": device or resource busy Dec 13 14:13:38.902854 ignition[1343]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2163229670", trying btrfs: device or resource busy Dec 13 14:13:38.902854 ignition[1343]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2163229670" Dec 13 14:13:38.912412 ignition[1343]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2163229670" Dec 13 14:13:38.912412 ignition[1343]: INFO : op(c): [started] unmounting "/mnt/oem2163229670" Dec 13 14:13:38.918721 ignition[1343]: INFO : op(c): [finished] unmounting "/mnt/oem2163229670" Dec 13 14:13:38.918721 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Dec 13 14:13:38.918721 ignition[1343]: INFO : files: op(10): [started] processing unit "coreos-metadata-sshkeys@.service" Dec 13 14:13:38.918721 ignition[1343]: INFO : files: op(10): [finished] processing unit "coreos-metadata-sshkeys@.service" Dec 13 14:13:38.918721 ignition[1343]: INFO : files: op(11): [started] processing unit "amazon-ssm-agent.service" Dec 13 14:13:38.918721 ignition[1343]: INFO : files: op(11): op(12): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Dec 13 14:13:38.918721 ignition[1343]: INFO : files: op(11): op(12): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Dec 13 14:13:38.918721 ignition[1343]: INFO : files: op(11): [finished] processing unit "amazon-ssm-agent.service" Dec 13 14:13:38.918721 ignition[1343]: INFO : files: op(13): [started] processing unit "nvidia.service" Dec 13 14:13:38.918721 ignition[1343]: INFO : files: op(13): [finished] processing unit "nvidia.service" Dec 13 14:13:38.918721 ignition[1343]: INFO : files: op(14): [started] processing unit "prepare-helm.service" Dec 13 14:13:38.918721 ignition[1343]: INFO : files: op(14): op(15): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 14:13:38.918721 ignition[1343]: INFO : files: op(14): op(15): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" 
Dec 13 14:13:38.918721 ignition[1343]: INFO : files: op(14): [finished] processing unit "prepare-helm.service" Dec 13 14:13:38.918721 ignition[1343]: INFO : files: op(16): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 14:13:38.918721 ignition[1343]: INFO : files: op(16): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 14:13:38.918721 ignition[1343]: INFO : files: op(17): [started] setting preset to enabled for "amazon-ssm-agent.service" Dec 13 14:13:38.918721 ignition[1343]: INFO : files: op(17): [finished] setting preset to enabled for "amazon-ssm-agent.service" Dec 13 14:13:38.918721 ignition[1343]: INFO : files: op(18): [started] setting preset to enabled for "nvidia.service" Dec 13 14:13:38.918721 ignition[1343]: INFO : files: op(18): [finished] setting preset to enabled for "nvidia.service" Dec 13 14:13:38.918721 ignition[1343]: INFO : files: op(19): [started] setting preset to enabled for "prepare-helm.service" Dec 13 14:13:38.974597 ignition[1343]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 14:13:38.987275 ignition[1343]: INFO : files: createResultFile: createFiles: op(1a): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:13:38.987275 ignition[1343]: INFO : files: createResultFile: createFiles: op(1a): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:13:38.987275 ignition[1343]: INFO : files: files passed Dec 13 14:13:38.987275 ignition[1343]: INFO : Ignition finished successfully Dec 13 14:13:38.997047 systemd[1]: Finished ignition-files.service. Dec 13 14:13:38.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:39.007606 kernel: audit: type=1130 audit(1734099218.998:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:39.010740 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 14:13:39.012733 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 14:13:39.016511 systemd[1]: Starting ignition-quench.service... Dec 13 14:13:39.030346 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 14:13:39.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:39.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:39.030660 systemd[1]: Finished ignition-quench.service. Dec 13 14:13:39.044005 kernel: audit: type=1130 audit(1734099219.032:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:39.046272 initrd-setup-root-after-ignition[1368]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 14:13:39.050811 systemd[1]: Finished initrd-setup-root-after-ignition.service. 
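The `setting preset to enabled` operations are how Ignition enables units without invoking `systemctl` inside the initramfs: it drops a systemd preset file under the new root, which first boot then honors. A hedged sketch of that step; the preset filename is an assumption for illustration, and the unit list is taken from the log lines above:

```python
from pathlib import Path

# Write a systemd preset under /sysroot so the units written above come up
# enabled on first boot. The exact preset filename is an assumption here.
preset = Path("/sysroot/etc/systemd/system-preset/20-ignition.preset")
preset.parent.mkdir(parents=True, exist_ok=True)
preset.write_text(
    "enable coreos-metadata-sshkeys@.service\n"
    "enable amazon-ssm-agent.service\n"
    "enable nvidia.service\n"
    "enable prepare-helm.service\n"
)
```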
Dec 13 14:13:39.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:39.051921 systemd[1]: Reached target ignition-complete.target. Dec 13 14:13:39.054060 systemd[1]: Starting initrd-parse-etc.service... Dec 13 14:13:39.086964 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 14:13:39.088832 systemd[1]: Finished initrd-parse-etc.service. Dec 13 14:13:39.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:39.090000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:39.092107 systemd[1]: Reached target initrd-fs.target. Dec 13 14:13:39.094978 systemd[1]: Reached target initrd.target. Dec 13 14:13:39.097813 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 14:13:39.101658 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 14:13:39.124207 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 14:13:39.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:39.127371 systemd[1]: Starting initrd-cleanup.service... Dec 13 14:13:39.148215 systemd[1]: Stopped target nss-lookup.target. Dec 13 14:13:39.151521 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 14:13:39.155012 systemd[1]: Stopped target timers.target. Dec 13 14:13:39.157998 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 14:13:39.160058 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 14:13:39.162000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:39.163790 systemd[1]: Stopped target initrd.target. Dec 13 14:13:39.166624 systemd[1]: Stopped target basic.target. Dec 13 14:13:39.168183 systemd[1]: Stopped target ignition-complete.target. Dec 13 14:13:39.173035 systemd[1]: Stopped target ignition-diskful.target. Dec 13 14:13:39.175739 systemd[1]: Stopped target initrd-root-device.target. Dec 13 14:13:39.182390 systemd[1]: Stopped target remote-fs.target. Dec 13 14:13:39.185449 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 14:13:39.188066 systemd[1]: Stopped target sysinit.target. Dec 13 14:13:39.190026 systemd[1]: Stopped target local-fs.target. Dec 13 14:13:39.196729 systemd[1]: Stopped target local-fs-pre.target. Dec 13 14:13:39.199897 systemd[1]: Stopped target swap.target. Dec 13 14:13:39.202689 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 14:13:39.204772 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 14:13:39.208000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:39.210355 systemd[1]: Stopped target cryptsetup.target. Dec 13 14:13:39.214833 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Dec 13 14:13:39.220000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:39.215079 systemd[1]: Stopped dracut-initqueue.service. Dec 13 14:13:39.221213 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 14:13:39.222702 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 14:13:39.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:39.227090 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 14:13:39.228000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:39.227362 systemd[1]: Stopped ignition-files.service. Dec 13 14:13:39.231778 systemd[1]: Stopping ignition-mount.service... Dec 13 14:13:39.259441 iscsid[1192]: iscsid shutting down. Dec 13 14:13:39.252450 systemd[1]: Stopping iscsid.service... Dec 13 14:13:39.261403 ignition[1381]: INFO : Ignition 2.14.0 Dec 13 14:13:39.261403 ignition[1381]: INFO : Stage: umount Dec 13 14:13:39.261403 ignition[1381]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:13:39.261403 ignition[1381]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 14:13:39.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:39.262890 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 14:13:39.263160 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 14:13:39.296527 ignition[1381]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 14:13:39.296527 ignition[1381]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 14:13:39.291251 systemd[1]: Stopping sysroot-boot.service... Dec 13 14:13:39.307194 ignition[1381]: INFO : PUT result: OK Dec 13 14:13:39.307663 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 14:13:39.308086 systemd[1]: Stopped systemd-udev-trigger.service. Dec 13 14:13:39.311254 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 14:13:39.317135 systemd[1]: Stopped dracut-pre-trigger.service. Dec 13 14:13:39.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:39.324073 systemd[1]: iscsid.service: Deactivated successfully. Dec 13 14:13:39.324284 systemd[1]: Stopped iscsid.service. Dec 13 14:13:39.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:39.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:13:39.329950 systemd[1]: Stopping iscsiuio.service... Dec 13 14:13:39.340161 ignition[1381]: INFO : umount: umount passed Dec 13 14:13:39.341908 systemd[1]: iscsiuio.service: Deactivated successfully. Dec 13 14:13:39.343631 systemd[1]: Stopped iscsiuio.service. Dec 13 14:13:39.344000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:39.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:39.347000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:39.347000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:39.352968 ignition[1381]: INFO : Ignition finished successfully Dec 13 14:13:39.346294 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 14:13:39.346468 systemd[1]: Finished initrd-cleanup.service. Dec 13 14:13:39.348665 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 14:13:39.348854 systemd[1]: Stopped ignition-mount.service. Dec 13 14:13:39.352838 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 14:13:39.359000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:39.353523 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 14:13:39.353671 systemd[1]: Stopped ignition-disks.service. Dec 13 14:13:39.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:39.364129 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 14:13:39.364231 systemd[1]: Stopped ignition-kargs.service. Dec 13 14:13:39.382439 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 14:13:39.382538 systemd[1]: Stopped ignition-fetch.service. Dec 13 14:13:39.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:39.393302 systemd[1]: Stopped target network.target. Dec 13 14:13:39.396023 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 14:13:39.396145 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 14:13:39.402000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:39.405117 systemd[1]: Stopped target paths.target. Dec 13 14:13:39.406702 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 14:13:39.414645 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 14:13:39.418069 systemd[1]: Stopped target slices.target. Dec 13 14:13:39.419545 systemd[1]: Stopped target sockets.target. 
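Every transition in this teardown is double-logged: a systemd console line plus an audit `SERVICE_START`/`SERVICE_STOP` record carrying `unit=` and `res=success`. If this console output is saved to a file, the pairs are easy to tally; the `boot.log` filename below is a hypothetical capture path:

```python
import re
from collections import Counter

# Tally audit SERVICE_START/SERVICE_STOP records per unit from a saved
# copy of this console log.
pat = re.compile(r"(SERVICE_START|SERVICE_STOP)\b.*?unit=(\S+)")
tally = Counter()
with open("boot.log") as f:
    for line in f:
        for kind, unit in pat.findall(line):
            tally[unit, kind] += 1

for (unit, kind), n in sorted(tally.items()):
    print(f"{unit:40} {kind:13} x{n}")
```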
Dec 13 14:13:39.423819 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 14:13:39.423947 systemd[1]: Closed iscsid.socket. Dec 13 14:13:39.428007 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 14:13:39.428099 systemd[1]: Closed iscsiuio.socket. Dec 13 14:13:39.430986 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 14:13:39.432225 systemd[1]: Stopped ignition-setup.service. Dec 13 14:13:39.434000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:39.437146 systemd[1]: Stopping systemd-networkd.service... Dec 13 14:13:39.440322 systemd[1]: Stopping systemd-resolved.service... Dec 13 14:13:39.440611 systemd-networkd[1187]: eth0: DHCPv6 lease lost Dec 13 14:13:39.445463 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 14:13:39.446720 systemd[1]: Stopped systemd-networkd.service. Dec 13 14:13:39.453000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:39.456535 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 14:13:39.457153 systemd[1]: Stopped sysroot-boot.service. Dec 13 14:13:39.462591 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 14:13:39.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:39.459000 audit: BPF prog-id=9 op=UNLOAD Dec 13 14:13:39.467000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:39.462666 systemd[1]: Closed systemd-networkd.socket. Dec 13 14:13:39.464267 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 14:13:39.464369 systemd[1]: Stopped initrd-setup-root.service. Dec 13 14:13:39.473257 systemd[1]: Stopping network-cleanup.service... Dec 13 14:13:39.481735 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 14:13:39.481998 systemd[1]: Stopped parse-ip-for-networkd.service. Dec 13 14:13:39.487104 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:13:39.485000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:39.487000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:39.487212 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:13:39.491237 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 14:13:39.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:39.492833 systemd[1]: Stopped systemd-modules-load.service. Dec 13 14:13:39.503407 systemd[1]: Stopping systemd-udevd.service... Dec 13 14:13:39.507922 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Dec 13 14:13:39.508882 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 14:13:39.509097 systemd[1]: Stopped systemd-resolved.service. Dec 13 14:13:39.514000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:39.517000 audit: BPF prog-id=6 op=UNLOAD Dec 13 14:13:39.522031 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 14:13:39.524397 systemd[1]: Stopped systemd-udevd.service. Dec 13 14:13:39.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:39.528338 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 14:13:39.530362 systemd[1]: Stopped network-cleanup.service. Dec 13 14:13:39.532000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:39.533848 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 14:13:39.536182 systemd[1]: Closed systemd-udevd-control.socket. Dec 13 14:13:39.539319 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 14:13:39.539414 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 14:13:39.542732 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 14:13:39.546076 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 14:13:39.547000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:39.549016 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 14:13:39.549122 systemd[1]: Stopped dracut-cmdline.service. Dec 13 14:13:39.552000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:39.553711 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 14:13:39.553793 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 14:13:39.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:39.559990 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 14:13:39.573157 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 14:13:39.577000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:39.573277 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 14:13:39.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:39.579381 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 14:13:39.579708 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 14:13:39.582654 systemd[1]: Reached target initrd-switch-root.target. 
Dec 13 14:13:39.581000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:39.592079 systemd[1]: Starting initrd-switch-root.service... Dec 13 14:13:39.608711 systemd[1]: Switching root. Dec 13 14:13:39.635641 systemd-journald[309]: Journal stopped Dec 13 14:13:45.524971 systemd-journald[309]: Received SIGTERM from PID 1 (systemd). Dec 13 14:13:45.525447 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 14:13:45.525507 kernel: SELinux: Class anon_inode not defined in policy. Dec 13 14:13:45.525540 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 14:13:45.525600 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 14:13:45.525643 kernel: SELinux: policy capability open_perms=1 Dec 13 14:13:45.525676 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 14:13:45.525706 kernel: SELinux: policy capability always_check_network=0 Dec 13 14:13:45.525870 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 14:13:45.525919 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 14:13:45.525954 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 14:13:45.525985 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 14:13:45.526027 systemd[1]: Successfully loaded SELinux policy in 114.603ms. Dec 13 14:13:45.526083 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 28.521ms. Dec 13 14:13:45.526130 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 14:13:45.526163 systemd[1]: Detected virtualization amazon. Dec 13 14:13:45.531409 systemd[1]: Detected architecture arm64. Dec 13 14:13:45.531446 systemd[1]: Detected first boot. Dec 13 14:13:45.531481 systemd[1]: Initializing machine ID from VM UUID. Dec 13 14:13:45.531514 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Dec 13 14:13:45.531548 systemd[1]: Populated /etc with preset unit settings. Dec 13 14:13:45.531988 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:13:45.532039 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:13:45.532073 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
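The `systemd 252 running in system mode (+PAM +AUDIT …)` banner encodes compile-time options as +/- flags. Splitting it confirms, for instance, that this build carries SELinux support (consistent with the policy-load messages just above) but neither AppArmor nor TPM2:

```python
# Flag string copied from the systemd banner above (the trailing
# "default-hierarchy=unified" token is omitted since it is not a +/- flag).
flags = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS "
         "+OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
         "+LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 "
         "+BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP "
         "+SYSVINIT")

enabled = {f[1:] for f in flags.split() if f[0] == "+"}
disabled = {f[1:] for f in flags.split() if f[0] == "-"}
assert "SELINUX" in enabled and "APPARMOR" in disabled and "TPM2" in disabled
```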
Dec 13 14:13:45.532106 kernel: kauditd_printk_skb: 56 callbacks suppressed Dec 13 14:13:45.532142 kernel: audit: type=1334 audit(1734099225.124:88): prog-id=12 op=LOAD Dec 13 14:13:45.532174 kernel: audit: type=1334 audit(1734099225.124:89): prog-id=3 op=UNLOAD Dec 13 14:13:45.532206 kernel: audit: type=1334 audit(1734099225.124:90): prog-id=13 op=LOAD Dec 13 14:13:45.532244 kernel: audit: type=1334 audit(1734099225.124:91): prog-id=14 op=LOAD Dec 13 14:13:45.532275 kernel: audit: type=1334 audit(1734099225.124:92): prog-id=4 op=UNLOAD Dec 13 14:13:45.532305 kernel: audit: type=1334 audit(1734099225.124:93): prog-id=5 op=UNLOAD Dec 13 14:13:45.532335 kernel: audit: type=1334 audit(1734099225.130:94): prog-id=15 op=LOAD Dec 13 14:13:45.532367 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 14:13:45.532399 kernel: audit: type=1334 audit(1734099225.130:95): prog-id=12 op=UNLOAD Dec 13 14:13:45.532440 systemd[1]: Stopped initrd-switch-root.service. Dec 13 14:13:45.532470 kernel: audit: type=1334 audit(1734099225.132:96): prog-id=16 op=LOAD Dec 13 14:13:45.532505 kernel: audit: type=1334 audit(1734099225.134:97): prog-id=17 op=LOAD Dec 13 14:13:45.532539 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 14:13:45.532619 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 14:13:45.534902 systemd[1]: Created slice system-addon\x2drun.slice. Dec 13 14:13:45.535980 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Dec 13 14:13:45.536039 systemd[1]: Created slice system-getty.slice. Dec 13 14:13:45.536074 systemd[1]: Created slice system-modprobe.slice. Dec 13 14:13:45.536112 systemd[1]: Created slice system-serial\x2dgetty.slice. Dec 13 14:13:45.536152 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 14:13:45.536188 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 14:13:45.536222 systemd[1]: Created slice user.slice. Dec 13 14:13:45.536257 systemd[1]: Started systemd-ask-password-console.path. Dec 13 14:13:45.536291 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 14:13:45.536324 systemd[1]: Set up automount boot.automount. Dec 13 14:13:45.536355 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 14:13:45.536389 systemd[1]: Stopped target initrd-switch-root.target. Dec 13 14:13:45.536421 systemd[1]: Stopped target initrd-fs.target. Dec 13 14:13:45.536456 systemd[1]: Stopped target initrd-root-fs.target. Dec 13 14:13:45.536488 systemd[1]: Reached target integritysetup.target. Dec 13 14:13:45.536519 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:13:45.536611 systemd[1]: Reached target remote-fs.target. Dec 13 14:13:45.542188 systemd[1]: Reached target slices.target. Dec 13 14:13:45.542436 systemd[1]: Reached target swap.target. Dec 13 14:13:45.542639 systemd[1]: Reached target torcx.target. Dec 13 14:13:45.542673 systemd[1]: Reached target veritysetup.target. Dec 13 14:13:45.542705 systemd[1]: Listening on systemd-coredump.socket. Dec 13 14:13:45.542743 systemd[1]: Listening on systemd-initctl.socket. Dec 13 14:13:45.542776 systemd[1]: Listening on systemd-networkd.socket. Dec 13 14:13:45.542807 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 14:13:45.542837 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 14:13:45.542867 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 14:13:45.542897 systemd[1]: Mounting dev-hugepages.mount... Dec 13 14:13:45.542926 systemd[1]: Mounting dev-mqueue.mount... 
Dec 13 14:13:45.542957 systemd[1]: Mounting media.mount... Dec 13 14:13:45.542987 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 14:13:45.543016 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 14:13:45.543050 systemd[1]: Mounting tmp.mount... Dec 13 14:13:45.543084 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 14:13:45.543116 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:13:45.543148 systemd[1]: Starting kmod-static-nodes.service... Dec 13 14:13:45.543179 systemd[1]: Starting modprobe@configfs.service... Dec 13 14:13:45.543210 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:13:45.543243 systemd[1]: Starting modprobe@drm.service... Dec 13 14:13:45.543273 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:13:45.543303 systemd[1]: Starting modprobe@fuse.service... Dec 13 14:13:45.543337 systemd[1]: Starting modprobe@loop.service... Dec 13 14:13:45.543369 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 14:13:45.543399 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 14:13:45.543429 systemd[1]: Stopped systemd-fsck-root.service. Dec 13 14:13:45.543458 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 14:13:45.543489 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 14:13:45.543522 systemd[1]: Stopped systemd-journald.service. Dec 13 14:13:45.543589 systemd[1]: Starting systemd-journald.service... Dec 13 14:13:45.544175 kernel: loop: module loaded Dec 13 14:13:45.544215 systemd[1]: Starting systemd-modules-load.service... Dec 13 14:13:45.544245 kernel: fuse: init (API version 7.34) Dec 13 14:13:45.544274 systemd[1]: Starting systemd-network-generator.service... Dec 13 14:13:45.544305 systemd[1]: Starting systemd-remount-fs.service... Dec 13 14:13:45.544336 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 14:13:45.544367 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 14:13:45.544400 systemd[1]: Stopped verity-setup.service. Dec 13 14:13:45.544431 systemd[1]: Mounted dev-hugepages.mount. Dec 13 14:13:45.544460 systemd[1]: Mounted dev-mqueue.mount. Dec 13 14:13:45.544493 systemd[1]: Mounted media.mount. Dec 13 14:13:45.544525 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 14:13:45.544596 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 14:13:45.544634 systemd[1]: Mounted tmp.mount. Dec 13 14:13:45.544667 systemd[1]: Finished kmod-static-nodes.service. Dec 13 14:13:45.544698 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 14:13:45.544727 systemd[1]: Finished modprobe@configfs.service. Dec 13 14:13:45.544757 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:13:45.544786 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:13:45.544976 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:13:45.545011 systemd[1]: Finished modprobe@drm.service. Dec 13 14:13:45.545041 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:13:45.545070 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:13:45.545100 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 14:13:45.545134 systemd[1]: Finished modprobe@fuse.service. Dec 13 14:13:45.545166 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:13:45.545196 systemd[1]: Finished modprobe@loop.service. 
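The modprobe@configfs, modprobe@dm_mod, modprobe@drm, modprobe@efi_pstore, modprobe@fuse, and modprobe@loop units started here are all instances of systemd's modprobe@.service template, whose oneshot ExecStart effectively runs `modprobe -abq <instance>`; that is why each finishes immediately (the `loop: module loaded` and `fuse: init` kernel lines below are the result). The equivalent loop, with the module list taken from the log:

```python
import subprocess

# What each modprobe@<module>.service instance above amounts to: a oneshot
# `modprobe -abq <module>` that exits as soon as the module is loaded.
for module in ("configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"):
    subprocess.run(["modprobe", "-abq", module], check=False)
```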
Dec 13 14:13:45.545228 systemd-journald[1493]: Journal started Dec 13 14:13:45.545346 systemd-journald[1493]: Runtime Journal (/run/log/journal/ec2fba501b74534b2b0c9fa8bef60c71) is 8.0M, max 75.4M, 67.4M free. Dec 13 14:13:40.525000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 14:13:40.715000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:13:40.715000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:13:40.715000 audit: BPF prog-id=10 op=LOAD Dec 13 14:13:40.715000 audit: BPF prog-id=10 op=UNLOAD Dec 13 14:13:40.716000 audit: BPF prog-id=11 op=LOAD Dec 13 14:13:40.716000 audit: BPF prog-id=11 op=UNLOAD Dec 13 14:13:40.948000 audit[1414]: AVC avc: denied { associate } for pid=1414 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 14:13:40.948000 audit[1414]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001458ac a1=40000c6de0 a2=40000cd0c0 a3=32 items=0 ppid=1397 pid=1414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:13:45.551859 systemd[1]: Finished systemd-modules-load.service. Dec 13 14:13:40.948000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 14:13:40.952000 audit[1414]: AVC avc: denied { associate } for pid=1414 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Dec 13 14:13:40.952000 audit[1414]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000145985 a2=1ed a3=0 items=2 ppid=1397 pid=1414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:13:40.952000 audit: CWD cwd="/" Dec 13 14:13:40.952000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:13:40.952000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:13:40.952000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 14:13:45.124000 audit: BPF prog-id=12 op=LOAD Dec 13 14:13:45.124000 audit: BPF prog-id=3 op=UNLOAD Dec 13 14:13:45.124000 audit: BPF prog-id=13 op=LOAD 
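The long digit runs in the PROCTITLE fields of these audit records are the process's argv, hex-encoded with NUL separators and capped in length (here at exactly 128 bytes, which clips the final path). Decoding the torcx-generator record above:

```python
# PROCTITLE field copied verbatim from the audit record above.
raw = (
    "2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F7273"
    "2F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E65"
    "7261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79"
    "002F72756E2F73797374656D642F67656E657261746F722E6C61"
)

argv = [part.decode() for part in bytes.fromhex(raw).split(b"\x00")]
print(argv)
# ['/usr/lib/systemd/system-generators/torcx-generator',
#  '/run/systemd/generator',
#  '/run/systemd/generator.early',
#  '/run/systemd/generator.la']   # '.late' truncated by the 128-byte cap
```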
Dec 13 14:13:45.124000 audit: BPF prog-id=14 op=LOAD Dec 13 14:13:45.124000 audit: BPF prog-id=4 op=UNLOAD Dec 13 14:13:45.124000 audit: BPF prog-id=5 op=UNLOAD Dec 13 14:13:45.130000 audit: BPF prog-id=15 op=LOAD Dec 13 14:13:45.130000 audit: BPF prog-id=12 op=UNLOAD Dec 13 14:13:45.132000 audit: BPF prog-id=16 op=LOAD Dec 13 14:13:45.134000 audit: BPF prog-id=17 op=LOAD Dec 13 14:13:45.134000 audit: BPF prog-id=13 op=UNLOAD Dec 13 14:13:45.134000 audit: BPF prog-id=14 op=UNLOAD Dec 13 14:13:45.137000 audit: BPF prog-id=18 op=LOAD Dec 13 14:13:45.137000 audit: BPF prog-id=15 op=UNLOAD Dec 13 14:13:45.139000 audit: BPF prog-id=19 op=LOAD Dec 13 14:13:45.142000 audit: BPF prog-id=20 op=LOAD Dec 13 14:13:45.142000 audit: BPF prog-id=16 op=UNLOAD Dec 13 14:13:45.142000 audit: BPF prog-id=17 op=UNLOAD Dec 13 14:13:45.142000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:45.156000 audit: BPF prog-id=18 op=UNLOAD Dec 13 14:13:45.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:45.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:45.395000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:45.401000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:45.559370 systemd[1]: Started systemd-journald.service. Dec 13 14:13:45.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:45.405000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:45.408000 audit: BPF prog-id=21 op=LOAD Dec 13 14:13:45.409000 audit: BPF prog-id=22 op=LOAD Dec 13 14:13:45.409000 audit: BPF prog-id=23 op=LOAD Dec 13 14:13:45.409000 audit: BPF prog-id=19 op=UNLOAD Dec 13 14:13:45.409000 audit: BPF prog-id=20 op=UNLOAD Dec 13 14:13:45.456000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:45.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:13:45.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:45.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:45.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:45.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:45.515000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 14:13:45.515000 audit[1493]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffd91462b0 a2=4000 a3=1 items=0 ppid=1 pid=1493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:13:45.515000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 14:13:45.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:45.519000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:45.526000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:45.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:45.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:45.535000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:45.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:45.542000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:13:45.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:45.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:45.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:45.122860 systemd[1]: Queued start job for default target multi-user.target. Dec 13 14:13:45.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:40.946262 /usr/lib/systemd/system-generators/torcx-generator[1414]: time="2024-12-13T14:13:40Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:13:45.144325 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 14:13:40.947223 /usr/lib/systemd/system-generators/torcx-generator[1414]: time="2024-12-13T14:13:40Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 14:13:45.559301 systemd[1]: Finished systemd-network-generator.service. Dec 13 14:13:40.947277 /usr/lib/systemd/system-generators/torcx-generator[1414]: time="2024-12-13T14:13:40Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 14:13:45.561603 systemd[1]: Finished systemd-remount-fs.service. Dec 13 14:13:40.947342 /usr/lib/systemd/system-generators/torcx-generator[1414]: time="2024-12-13T14:13:40Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Dec 13 14:13:45.563970 systemd[1]: Reached target network-pre.target. Dec 13 14:13:40.947367 /usr/lib/systemd/system-generators/torcx-generator[1414]: time="2024-12-13T14:13:40Z" level=debug msg="skipped missing lower profile" missing profile=oem Dec 13 14:13:45.570383 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 14:13:40.947426 /usr/lib/systemd/system-generators/torcx-generator[1414]: time="2024-12-13T14:13:40Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Dec 13 14:13:45.575257 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 14:13:40.947456 /usr/lib/systemd/system-generators/torcx-generator[1414]: time="2024-12-13T14:13:40Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Dec 13 14:13:45.576738 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 14:13:40.947869 /usr/lib/systemd/system-generators/torcx-generator[1414]: time="2024-12-13T14:13:40Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Dec 13 14:13:45.580301 systemd[1]: Starting systemd-hwdb-update.service... 
Dec 13 14:13:40.947945 /usr/lib/systemd/system-generators/torcx-generator[1414]: time="2024-12-13T14:13:40Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 14:13:45.584141 systemd[1]: Starting systemd-journal-flush.service... Dec 13 14:13:40.947980 /usr/lib/systemd/system-generators/torcx-generator[1414]: time="2024-12-13T14:13:40Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 14:13:40.948950 /usr/lib/systemd/system-generators/torcx-generator[1414]: time="2024-12-13T14:13:40Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Dec 13 14:13:40.949035 /usr/lib/systemd/system-generators/torcx-generator[1414]: time="2024-12-13T14:13:40Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Dec 13 14:13:45.588229 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:13:40.949079 /usr/lib/systemd/system-generators/torcx-generator[1414]: time="2024-12-13T14:13:40Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6 Dec 13 14:13:40.949118 /usr/lib/systemd/system-generators/torcx-generator[1414]: time="2024-12-13T14:13:40Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Dec 13 14:13:40.949163 /usr/lib/systemd/system-generators/torcx-generator[1414]: time="2024-12-13T14:13:40Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6 Dec 13 14:13:40.949201 /usr/lib/systemd/system-generators/torcx-generator[1414]: time="2024-12-13T14:13:40Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Dec 13 14:13:44.178888 /usr/lib/systemd/system-generators/torcx-generator[1414]: time="2024-12-13T14:13:44Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:13:44.179433 /usr/lib/systemd/system-generators/torcx-generator[1414]: time="2024-12-13T14:13:44Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:13:45.590602 systemd[1]: Starting systemd-random-seed.service... 
Dec 13 14:13:44.179745 /usr/lib/systemd/system-generators/torcx-generator[1414]: time="2024-12-13T14:13:44Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:13:44.180221 /usr/lib/systemd/system-generators/torcx-generator[1414]: time="2024-12-13T14:13:44Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:13:44.180342 /usr/lib/systemd/system-generators/torcx-generator[1414]: time="2024-12-13T14:13:44Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Dec 13 14:13:44.180492 /usr/lib/systemd/system-generators/torcx-generator[1414]: time="2024-12-13T14:13:44Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Dec 13 14:13:45.592788 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:13:45.595208 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:13:45.602914 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 14:13:45.607021 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 14:13:45.627696 systemd-journald[1493]: Time spent on flushing to /var/log/journal/ec2fba501b74534b2b0c9fa8bef60c71 is 60.536ms for 1145 entries. Dec 13 14:13:45.627696 systemd-journald[1493]: System Journal (/var/log/journal/ec2fba501b74534b2b0c9fa8bef60c71) is 8.0M, max 195.6M, 187.6M free. Dec 13 14:13:45.708257 systemd-journald[1493]: Received client request to flush runtime journal. Dec 13 14:13:45.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:45.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:45.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:45.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:45.640597 systemd[1]: Finished systemd-random-seed.service. Dec 13 14:13:45.642469 systemd[1]: Reached target first-boot-complete.target. Dec 13 14:13:45.667081 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:13:45.702213 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 14:13:45.706225 systemd[1]: Starting systemd-sysusers.service... Dec 13 14:13:45.709982 systemd[1]: Finished systemd-journal-flush.service. Dec 13 14:13:45.758450 systemd[1]: Finished systemd-udev-trigger.service. 
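
The generator's last step above seals its state into /run/metadata/torcx as KEY="VALUE" pairs (TORCX_LOWER_PROFILES, TORCX_UPPER_PROFILE, TORCX_PROFILE_PATH, TORCX_BINDIR, TORCX_UNPACKDIR), which later units read as an environment file. A minimal parser for that format, assuming the file holds exactly the pairs quoted in the "system state sealed" entry:

    import re

    def parse_env_file(path="/run/metadata/torcx"):
        env = {}
        with open(path) as f:
            for line in f:
                m = re.match(r'([A-Z_][A-Z0-9_]*)="(.*)"\s*$', line)
                if m:
                    env[m.group(1)] = m.group(2)
        return env

    # on this host: parse_env_file()["TORCX_BINDIR"] == "/run/torcx/bin"
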
Dec 13 14:13:45.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:45.762493 systemd[1]: Starting systemd-udev-settle.service... Dec 13 14:13:45.777333 udevadm[1533]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 14:13:45.949063 systemd[1]: Finished systemd-sysusers.service. Dec 13 14:13:45.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:46.522329 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 14:13:46.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:46.524000 audit: BPF prog-id=24 op=LOAD Dec 13 14:13:46.524000 audit: BPF prog-id=25 op=LOAD Dec 13 14:13:46.524000 audit: BPF prog-id=7 op=UNLOAD Dec 13 14:13:46.524000 audit: BPF prog-id=8 op=UNLOAD Dec 13 14:13:46.527811 systemd[1]: Starting systemd-udevd.service... Dec 13 14:13:46.568891 systemd-udevd[1534]: Using default interface naming scheme 'v252'. Dec 13 14:13:46.631526 systemd[1]: Started systemd-udevd.service. Dec 13 14:13:46.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:46.633000 audit: BPF prog-id=26 op=LOAD Dec 13 14:13:46.638970 systemd[1]: Starting systemd-networkd.service... Dec 13 14:13:46.657000 audit: BPF prog-id=27 op=LOAD Dec 13 14:13:46.657000 audit: BPF prog-id=28 op=LOAD Dec 13 14:13:46.657000 audit: BPF prog-id=29 op=LOAD Dec 13 14:13:46.660371 systemd[1]: Starting systemd-userdbd.service... Dec 13 14:13:46.736236 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Dec 13 14:13:46.740666 systemd[1]: Started systemd-userdbd.service. Dec 13 14:13:46.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:46.751064 (udev-worker)[1544]: Network interface NamePolicy= disabled on kernel command line. Dec 13 14:13:46.966344 systemd-networkd[1540]: lo: Link UP Dec 13 14:13:46.966372 systemd-networkd[1540]: lo: Gained carrier Dec 13 14:13:46.967485 systemd-networkd[1540]: Enumeration completed Dec 13 14:13:46.967717 systemd[1]: Started systemd-networkd.service. Dec 13 14:13:46.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:46.972131 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 14:13:46.974476 systemd-networkd[1540]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 13 14:13:46.980536 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1539) Dec 13 14:13:46.980715 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:13:46.981146 systemd-networkd[1540]: eth0: Link UP Dec 13 14:13:46.981652 systemd-networkd[1540]: eth0: Gained carrier Dec 13 14:13:46.988843 systemd-networkd[1540]: eth0: DHCPv4 address 172.31.24.251/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 13 14:13:47.164081 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 14:13:47.167840 systemd[1]: Finished systemd-udev-settle.service. Dec 13 14:13:47.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:47.171943 systemd[1]: Starting lvm2-activation-early.service... Dec 13 14:13:47.233205 lvm[1651]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:13:47.271107 systemd[1]: Finished lvm2-activation-early.service. Dec 13 14:13:47.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:47.273040 systemd[1]: Reached target cryptsetup.target. Dec 13 14:13:47.276854 systemd[1]: Starting lvm2-activation.service... Dec 13 14:13:47.285076 lvm[1652]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:13:47.321257 systemd[1]: Finished lvm2-activation.service. Dec 13 14:13:47.323078 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:13:47.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:47.324711 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 14:13:47.324752 systemd[1]: Reached target local-fs.target. Dec 13 14:13:47.326277 systemd[1]: Reached target machines.target. Dec 13 14:13:47.329927 systemd[1]: Starting ldconfig.service... Dec 13 14:13:47.332285 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:13:47.332421 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:13:47.334789 systemd[1]: Starting systemd-boot-update.service... Dec 13 14:13:47.339315 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 14:13:47.343473 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 14:13:47.347444 systemd[1]: Starting systemd-sysext.service... Dec 13 14:13:47.370054 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1654 (bootctl) Dec 13 14:13:47.372465 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 14:13:47.390952 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 14:13:47.403981 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 14:13:47.404423 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 14:13:47.447263 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. 
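
The DHCPv4 lease recorded above (172.31.24.251/20 with gateway 172.31.16.1, acquired from 172.31.16.1) is internally consistent: the /20 prefix places the gateway inside the same on-link network, which the standard library confirms directly:

    import ipaddress

    iface = ipaddress.ip_interface("172.31.24.251/20")  # lease from the log
    gateway = ipaddress.ip_address("172.31.16.1")
    print(iface.network)             # 172.31.16.0/20
    print(gateway in iface.network)  # True: the gateway is directly reachable
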
Dec 13 14:13:47.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:47.453602 kernel: loop0: detected capacity change from 0 to 194512 Dec 13 14:13:47.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:47.540808 systemd-fsck[1664]: fsck.fat 4.2 (2021-01-31) Dec 13 14:13:47.540808 systemd-fsck[1664]: /dev/nvme0n1p1: 236 files, 117175/258078 clusters Dec 13 14:13:47.538225 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 14:13:47.542660 systemd[1]: Mounting boot.mount... Dec 13 14:13:47.584233 systemd[1]: Mounted boot.mount. Dec 13 14:13:47.619591 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 14:13:47.621074 systemd[1]: Finished systemd-boot-update.service. Dec 13 14:13:47.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:47.641600 kernel: loop1: detected capacity change from 0 to 194512 Dec 13 14:13:47.653689 (sd-sysext)[1682]: Using extensions 'kubernetes'. Dec 13 14:13:47.656242 (sd-sysext)[1682]: Merged extensions into '/usr'. Dec 13 14:13:47.694208 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 14:13:47.695912 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 14:13:47.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:47.701127 systemd[1]: Mounting usr-share-oem.mount... Dec 13 14:13:47.703115 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:13:47.706354 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:13:47.714465 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:13:47.718939 systemd[1]: Starting modprobe@loop.service... Dec 13 14:13:47.720659 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:13:47.721022 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:13:47.727680 systemd[1]: Mounted usr-share-oem.mount. Dec 13 14:13:47.731023 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:13:47.731430 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:13:47.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:47.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:47.734735 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Dec 13 14:13:47.735150 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:13:47.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:47.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:47.738277 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:13:47.738820 systemd[1]: Finished modprobe@loop.service. Dec 13 14:13:47.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:47.739000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:47.742065 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:13:47.742268 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:13:47.744266 systemd[1]: Finished systemd-sysext.service. Dec 13 14:13:47.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:47.749283 systemd[1]: Starting ensure-sysext.service... Dec 13 14:13:47.753645 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 14:13:47.768747 systemd[1]: Reloading. Dec 13 14:13:47.837128 systemd-tmpfiles[1689]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 14:13:47.869211 systemd-tmpfiles[1689]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 14:13:47.905212 /usr/lib/systemd/system-generators/torcx-generator[1708]: time="2024-12-13T14:13:47Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:13:47.905312 /usr/lib/systemd/system-generators/torcx-generator[1708]: time="2024-12-13T14:13:47Z" level=info msg="torcx already run" Dec 13 14:13:47.907162 systemd-tmpfiles[1689]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 14:13:48.112441 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:13:48.112480 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:13:48.155702 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
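
The reload turns up compatibility warnings: locksmithd.service still carries the legacy cgroup directives CPUShares= and MemoryLimit=, and docker.socket references the legacy /var/run path. Below is a sketch of the sort of lint one might run over unit files before support is removed; it is illustrative only, not how systemd performs the translation, and note that CPUShares= and CPUWeight= use different value scales (defaults 1024 versus 100), so a real fix converts the value rather than just renaming the key:

    # illustrative lint over systemd unit text, not systemd's own logic
    LEGACY = {"CPUShares": "CPUWeight", "MemoryLimit": "MemoryMax"}

    def flag_legacy_directives(unit_text):
        for n, line in enumerate(unit_text.splitlines(), 1):
            key = line.split("=", 1)[0].strip()
            if key in LEGACY:
                print(f"line {n}: {key}= is deprecated; use {LEGACY[key]}=")

    flag_legacy_directives("[Service]\nCPUShares=512\nMemoryLimit=1G\n")
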
Dec 13 14:13:48.311000 audit: BPF prog-id=30 op=LOAD Dec 13 14:13:48.311000 audit: BPF prog-id=21 op=UNLOAD Dec 13 14:13:48.311000 audit: BPF prog-id=31 op=LOAD Dec 13 14:13:48.311000 audit: BPF prog-id=32 op=LOAD Dec 13 14:13:48.311000 audit: BPF prog-id=22 op=UNLOAD Dec 13 14:13:48.311000 audit: BPF prog-id=23 op=UNLOAD Dec 13 14:13:48.314000 audit: BPF prog-id=33 op=LOAD Dec 13 14:13:48.314000 audit: BPF prog-id=26 op=UNLOAD Dec 13 14:13:48.317000 audit: BPF prog-id=34 op=LOAD Dec 13 14:13:48.317000 audit: BPF prog-id=27 op=UNLOAD Dec 13 14:13:48.317000 audit: BPF prog-id=35 op=LOAD Dec 13 14:13:48.317000 audit: BPF prog-id=36 op=LOAD Dec 13 14:13:48.317000 audit: BPF prog-id=28 op=UNLOAD Dec 13 14:13:48.317000 audit: BPF prog-id=29 op=UNLOAD Dec 13 14:13:48.321000 audit: BPF prog-id=37 op=LOAD Dec 13 14:13:48.321000 audit: BPF prog-id=38 op=LOAD Dec 13 14:13:48.321000 audit: BPF prog-id=24 op=UNLOAD Dec 13 14:13:48.322000 audit: BPF prog-id=25 op=UNLOAD Dec 13 14:13:48.354633 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:13:48.358057 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:13:48.362227 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:13:48.367591 systemd[1]: Starting modprobe@loop.service... Dec 13 14:13:48.369274 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:13:48.369694 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:13:48.372037 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 14:13:48.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:48.375232 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:13:48.375616 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:13:48.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:48.376000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:48.378452 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:13:48.378778 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:13:48.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:48.379000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:48.382043 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:13:48.382416 systemd[1]: Finished modprobe@loop.service. 
Dec 13 14:13:48.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:48.383000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:48.388240 systemd[1]: Starting audit-rules.service... Dec 13 14:13:48.392409 systemd[1]: Starting clean-ca-certificates.service... Dec 13 14:13:48.398804 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 14:13:48.402775 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:13:48.403087 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:13:48.403000 audit: BPF prog-id=39 op=LOAD Dec 13 14:13:48.409212 systemd[1]: Starting systemd-resolved.service... Dec 13 14:13:48.410000 audit: BPF prog-id=40 op=LOAD Dec 13 14:13:48.414526 systemd[1]: Starting systemd-timesyncd.service... Dec 13 14:13:48.419377 systemd[1]: Starting systemd-update-utmp.service... Dec 13 14:13:48.433983 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:13:48.440308 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:13:48.444792 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:13:48.450506 systemd[1]: Starting modprobe@loop.service... Dec 13 14:13:48.452234 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:13:48.452646 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:13:48.459675 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:13:48.465078 systemd[1]: Starting modprobe@drm.service... Dec 13 14:13:48.466922 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:13:48.467252 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:13:48.471243 systemd[1]: Finished ensure-sysext.service. Dec 13 14:13:48.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:48.489390 systemd[1]: Finished clean-ca-certificates.service. Dec 13 14:13:48.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:48.491349 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:13:48.490000 audit[1774]: SYSTEM_BOOT pid=1774 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? 
res=success' Dec 13 14:13:48.496715 systemd[1]: Finished systemd-update-utmp.service. Dec 13 14:13:48.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:48.507748 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:13:48.508084 systemd[1]: Finished modprobe@drm.service. Dec 13 14:13:48.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:48.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:48.510744 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:13:48.511072 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:13:48.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:48.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:48.513469 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:13:48.514392 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:13:48.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:48.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:48.516892 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:13:48.517235 systemd[1]: Finished modprobe@loop.service. Dec 13 14:13:48.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:48.517000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:48.519248 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:13:48.519350 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:13:48.559616 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 14:13:48.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:13:48.613000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 14:13:48.613000 audit[1792]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffffbdb050 a2=420 a3=0 items=0 ppid=1768 pid=1792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:13:48.613000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 14:13:48.615959 augenrules[1792]: No rules Dec 13 14:13:48.617128 systemd[1]: Finished audit-rules.service. Dec 13 14:13:48.631111 systemd-resolved[1771]: Positive Trust Anchors: Dec 13 14:13:48.631139 systemd-resolved[1771]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 14:13:48.631194 systemd-resolved[1771]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 14:13:48.676404 systemd[1]: Started systemd-timesyncd.service. Dec 13 14:13:48.678365 systemd[1]: Reached target time-set.target. Dec 13 14:13:48.687985 systemd-resolved[1771]: Defaulting to hostname 'linux'. Dec 13 14:13:48.691216 systemd[1]: Started systemd-resolved.service. Dec 13 14:13:48.692973 systemd[1]: Reached target network.target. Dec 13 14:13:48.694535 systemd[1]: Reached target nss-lookup.target. Dec 13 14:13:48.725370 ldconfig[1653]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 14:13:48.736823 systemd[1]: Finished ldconfig.service. Dec 13 14:13:48.740822 systemd[1]: Starting systemd-update-done.service... Dec 13 14:13:48.755716 systemd[1]: Finished systemd-update-done.service. Dec 13 14:13:48.757620 systemd[1]: Reached target sysinit.target. Dec 13 14:13:48.759384 systemd[1]: Started motdgen.path. Dec 13 14:13:48.761246 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 14:13:48.763547 systemd[1]: Started logrotate.timer. Dec 13 14:13:48.765141 systemd[1]: Started mdadm.timer. Dec 13 14:13:48.766508 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 14:13:48.768140 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 14:13:48.768199 systemd[1]: Reached target paths.target. Dec 13 14:13:48.769628 systemd[1]: Reached target timers.target. Dec 13 14:13:48.771637 systemd[1]: Listening on dbus.socket. Dec 13 14:13:48.775083 systemd[1]: Starting docker.socket... Dec 13 14:13:48.781696 systemd[1]: Listening on sshd.socket. Dec 13 14:13:48.784214 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:13:48.785414 systemd[1]: Listening on docker.socket. Dec 13 14:13:48.787277 systemd[1]: Reached target sockets.target. Dec 13 14:13:48.789008 systemd[1]: Reached target basic.target. 
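
The PROCTITLE record above hex-encodes the triggering command line with NUL-separated arguments; decoding it recovers the auditctl invocation that loaded the rule file augenrules reported as empty:

    raw = ("2F7362696E2F617564697463746C002D52002F657463"
           "2F61756469742F61756469742E72756C6573")  # hex from the record
    argv = bytes.fromhex(raw).split(b"\x00")
    print([a.decode() for a in argv])
    # ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']
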
Dec 13 14:13:48.790624 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:13:48.790816 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:13:48.805150 systemd[1]: Starting containerd.service... Dec 13 14:13:48.809174 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Dec 13 14:13:48.814199 systemd[1]: Starting dbus.service... Dec 13 14:13:48.817800 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 14:13:48.821810 systemd[1]: Starting extend-filesystems.service... Dec 13 14:13:48.823468 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 14:13:48.827143 systemd[1]: Starting motdgen.service... Dec 13 14:13:48.831080 systemd[1]: Starting prepare-helm.service... Dec 13 14:13:48.835174 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 14:13:48.840674 systemd[1]: Starting sshd-keygen.service... Dec 13 14:13:48.848412 systemd[1]: Starting systemd-logind.service... Dec 13 14:13:48.849934 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:13:48.850086 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 14:13:48.851013 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 14:13:48.853426 systemd[1]: Starting update-engine.service... Dec 13 14:13:48.857961 systemd-timesyncd[1773]: Contacted time server 198.30.92.2:123 (0.flatcar.pool.ntp.org). Dec 13 14:13:48.858061 systemd-timesyncd[1773]: Initial clock synchronization to Fri 2024-12-13 14:13:48.901885 UTC. Dec 13 14:13:48.957884 jq[1812]: true Dec 13 14:13:48.958290 jq[1804]: false Dec 13 14:13:48.859061 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 14:13:48.993812 tar[1816]: linux-arm64/helm Dec 13 14:13:48.872925 systemd-networkd[1540]: eth0: Gained IPv6LL Dec 13 14:13:48.876643 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 14:13:48.878949 systemd[1]: Reached target network-online.target. Dec 13 14:13:49.006998 extend-filesystems[1805]: Found loop1 Dec 13 14:13:49.006998 extend-filesystems[1805]: Found nvme0n1 Dec 13 14:13:49.006998 extend-filesystems[1805]: Found nvme0n1p1 Dec 13 14:13:49.006998 extend-filesystems[1805]: Found nvme0n1p2 Dec 13 14:13:49.006998 extend-filesystems[1805]: Found nvme0n1p3 Dec 13 14:13:49.006998 extend-filesystems[1805]: Found usr Dec 13 14:13:49.006998 extend-filesystems[1805]: Found nvme0n1p4 Dec 13 14:13:49.006998 extend-filesystems[1805]: Found nvme0n1p6 Dec 13 14:13:49.006998 extend-filesystems[1805]: Found nvme0n1p7 Dec 13 14:13:49.006998 extend-filesystems[1805]: Found nvme0n1p9 Dec 13 14:13:49.006998 extend-filesystems[1805]: Checking size of /dev/nvme0n1p9 Dec 13 14:13:48.883745 systemd[1]: Started amazon-ssm-agent.service. Dec 13 14:13:48.888538 systemd[1]: Starting kubelet.service... Dec 13 14:13:48.892254 systemd[1]: Started nvidia.service. Dec 13 14:13:48.895794 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 14:13:48.896204 systemd[1]: Finished ssh-key-proc-cmdline.service. 
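
timesyncd's first contact with 198.30.92.2 (0.flatcar.pool.ntp.org) is stamped 14:13:48.857961 locally, while the synchronization it then reports lands the clock on 14:13:48.901885 UTC; the roughly 44 ms gap is the step applied at first sync, assuming the journal stamp was taken just before the clock was set:

    from datetime import datetime

    contacted = datetime.fromisoformat("2024-12-13 14:13:48.857961")  # journal stamp
    synced    = datetime.fromisoformat("2024-12-13 14:13:48.901885")  # NTP result
    print(f"{(synced - contacted).total_seconds() * 1000:.1f} ms")    # ~43.9 ms
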
Dec 13 14:13:48.971498 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 14:13:48.977335 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 14:13:49.108324 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 14:13:49.108682 systemd[1]: Finished motdgen.service. Dec 13 14:13:49.138605 jq[1833]: true Dec 13 14:13:49.184871 extend-filesystems[1805]: Resized partition /dev/nvme0n1p9 Dec 13 14:13:49.206808 dbus-daemon[1803]: [system] SELinux support is enabled Dec 13 14:13:49.207946 systemd[1]: Started dbus.service. Dec 13 14:13:49.213234 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 14:13:49.213280 systemd[1]: Reached target system-config.target. Dec 13 14:13:49.218053 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 14:13:49.218099 systemd[1]: Reached target user-config.target. Dec 13 14:13:49.224148 dbus-daemon[1803]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1540 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 13 14:13:49.230963 systemd[1]: Starting systemd-hostnamed.service... Dec 13 14:13:49.234947 extend-filesystems[1850]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 14:13:49.284588 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Dec 13 14:13:49.309425 amazon-ssm-agent[1821]: 2024/12/13 14:13:49 Failed to load instance info from vault. RegistrationKey does not exist. Dec 13 14:13:49.313276 amazon-ssm-agent[1821]: Initializing new seelog logger Dec 13 14:13:49.313621 amazon-ssm-agent[1821]: New Seelog Logger Creation Complete Dec 13 14:13:49.313798 amazon-ssm-agent[1821]: 2024/12/13 14:13:49 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 14:13:49.313798 amazon-ssm-agent[1821]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 14:13:49.314419 amazon-ssm-agent[1821]: 2024/12/13 14:13:49 processing appconfig overrides Dec 13 14:13:49.339185 update_engine[1811]: I1213 14:13:49.337163 1811 main.cc:92] Flatcar Update Engine starting Dec 13 14:13:49.357012 systemd[1]: Started update-engine.service. Dec 13 14:13:49.362269 systemd[1]: Started locksmithd.service. Dec 13 14:13:49.364866 update_engine[1811]: I1213 14:13:49.361276 1811 update_check_scheduler.cc:74] Next update check in 6m15s Dec 13 14:13:49.367836 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Dec 13 14:13:49.390212 extend-filesystems[1850]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Dec 13 14:13:49.390212 extend-filesystems[1850]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 14:13:49.390212 extend-filesystems[1850]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Dec 13 14:13:49.408330 extend-filesystems[1805]: Resized filesystem in /dev/nvme0n1p9 Dec 13 14:13:49.396427 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 14:13:49.396910 systemd[1]: Finished extend-filesystems.service. Dec 13 14:13:49.419953 bash[1876]: Updated "/home/core/.ssh/authorized_keys" Dec 13 14:13:49.421737 systemd[1]: Finished update-ssh-keys-after-ignition.service. 
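
The resize2fs output above grows the root filesystem on-line from 553472 to 1489915 blocks at 4k per block, i.e. from about 2.1 GiB to about 5.7 GiB, which a quick check confirms:

    BLOCK = 4096
    old, new = 553472, 1489915            # block counts from the resize2fs output
    print(f"{old * BLOCK / 2**30:.2f} GiB")  # 2.11 GiB before
    print(f"{new * BLOCK / 2**30:.2f} GiB")  # 5.68 GiB after on-line growth of /
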
Dec 13 14:13:49.467380 env[1825]: time="2024-12-13T14:13:49.458517565Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 14:13:49.562233 systemd[1]: nvidia.service: Deactivated successfully. Dec 13 14:13:49.573785 systemd-logind[1810]: Watching system buttons on /dev/input/event0 (Power Button) Dec 13 14:13:49.573844 systemd-logind[1810]: Watching system buttons on /dev/input/event1 (Sleep Button) Dec 13 14:13:49.581863 systemd-logind[1810]: New seat seat0. Dec 13 14:13:49.594111 systemd[1]: Started systemd-logind.service. Dec 13 14:13:49.654027 env[1825]: time="2024-12-13T14:13:49.653964771Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 14:13:49.654832 env[1825]: time="2024-12-13T14:13:49.654729016Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:13:49.664639 env[1825]: time="2024-12-13T14:13:49.664081111Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:13:49.665062 env[1825]: time="2024-12-13T14:13:49.665012038Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:13:49.667709 env[1825]: time="2024-12-13T14:13:49.667637799Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:13:49.670015 env[1825]: time="2024-12-13T14:13:49.669955273Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 14:13:49.670267 env[1825]: time="2024-12-13T14:13:49.670232696Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 14:13:49.670395 env[1825]: time="2024-12-13T14:13:49.670365493Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 14:13:49.670719 env[1825]: time="2024-12-13T14:13:49.670689231Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:13:49.671316 env[1825]: time="2024-12-13T14:13:49.671278996Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:13:49.673047 env[1825]: time="2024-12-13T14:13:49.672416013Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:13:49.673316 env[1825]: time="2024-12-13T14:13:49.673268689Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Dec 13 14:13:49.674490 env[1825]: time="2024-12-13T14:13:49.674367032Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 14:13:49.674698 env[1825]: time="2024-12-13T14:13:49.674662517Z" level=info msg="metadata content store policy set" policy=shared Dec 13 14:13:49.686825 env[1825]: time="2024-12-13T14:13:49.686751811Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 14:13:49.687037 env[1825]: time="2024-12-13T14:13:49.687006227Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 14:13:49.687344 env[1825]: time="2024-12-13T14:13:49.687301615Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 14:13:49.689838 env[1825]: time="2024-12-13T14:13:49.689770802Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 14:13:49.690117 env[1825]: time="2024-12-13T14:13:49.690075804Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 14:13:49.690336 env[1825]: time="2024-12-13T14:13:49.690298236Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 14:13:49.690593 env[1825]: time="2024-12-13T14:13:49.690539102Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 14:13:49.691375 env[1825]: time="2024-12-13T14:13:49.691309604Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 14:13:49.692497 env[1825]: time="2024-12-13T14:13:49.692441916Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 14:13:49.692739 env[1825]: time="2024-12-13T14:13:49.692705188Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 14:13:49.693394 env[1825]: time="2024-12-13T14:13:49.693332460Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 14:13:49.693627 env[1825]: time="2024-12-13T14:13:49.693593024Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 14:13:49.694018 env[1825]: time="2024-12-13T14:13:49.693979719Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 14:13:49.694339 env[1825]: time="2024-12-13T14:13:49.694302122Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 14:13:49.694984 env[1825]: time="2024-12-13T14:13:49.694933822Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 14:13:49.697693 env[1825]: time="2024-12-13T14:13:49.697631890Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 14:13:49.697921 env[1825]: time="2024-12-13T14:13:49.697885560Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 14:13:49.698180 env[1825]: time="2024-12-13T14:13:49.698144680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Dec 13 14:13:49.698327 env[1825]: time="2024-12-13T14:13:49.698295250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 14:13:49.698697 env[1825]: time="2024-12-13T14:13:49.698660670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 14:13:49.699047 env[1825]: time="2024-12-13T14:13:49.699003806Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 14:13:49.699857 env[1825]: time="2024-12-13T14:13:49.699815533Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 14:13:49.701267 env[1825]: time="2024-12-13T14:13:49.701199204Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 14:13:49.702430 env[1825]: time="2024-12-13T14:13:49.702383620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 14:13:49.702669 env[1825]: time="2024-12-13T14:13:49.702637506Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 14:13:49.702982 env[1825]: time="2024-12-13T14:13:49.702946768Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 14:13:49.704182 env[1825]: time="2024-12-13T14:13:49.704133518Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 14:13:49.704426 env[1825]: time="2024-12-13T14:13:49.704392602Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 14:13:49.704585 env[1825]: time="2024-12-13T14:13:49.704530176Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 14:13:49.704843 env[1825]: time="2024-12-13T14:13:49.704809524Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 14:13:49.706072 env[1825]: time="2024-12-13T14:13:49.705460285Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 14:13:49.706282 env[1825]: time="2024-12-13T14:13:49.706242266Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 14:13:49.706903 env[1825]: time="2024-12-13T14:13:49.706851453Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 14:13:49.707788 env[1825]: time="2024-12-13T14:13:49.707113088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 14:13:49.709776 env[1825]: time="2024-12-13T14:13:49.709622562Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 14:13:49.713003 env[1825]: time="2024-12-13T14:13:49.712945424Z" level=info msg="Connect containerd service" Dec 13 14:13:49.713503 env[1825]: time="2024-12-13T14:13:49.713457816Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 14:13:49.716069 env[1825]: time="2024-12-13T14:13:49.715999214Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:13:49.719875 env[1825]: time="2024-12-13T14:13:49.719716617Z" level=info msg="Start subscribing containerd event" Dec 13 14:13:49.720888 env[1825]: time="2024-12-13T14:13:49.720794431Z" level=info msg="Start recovering state" Dec 13 14:13:49.721082 env[1825]: time="2024-12-13T14:13:49.721032494Z" level=info msg="Start event monitor" Dec 13 14:13:49.721198 env[1825]: time="2024-12-13T14:13:49.721120431Z" level=info msg="Start snapshots syncer" Dec 13 14:13:49.721198 env[1825]: time="2024-12-13T14:13:49.721151850Z" level=info msg="Start cni network conf syncer for default" Dec 13 14:13:49.721329 env[1825]: time="2024-12-13T14:13:49.721173461Z" level=info msg="Start streaming server" Dec 13 14:13:49.721519 env[1825]: time="2024-12-13T14:13:49.720715783Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Dec 13 14:13:49.724274 env[1825]: time="2024-12-13T14:13:49.721681859Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 14:13:49.724274 env[1825]: time="2024-12-13T14:13:49.721862415Z" level=info msg="containerd successfully booted in 0.266970s" Dec 13 14:13:49.721976 systemd[1]: Started containerd.service. Dec 13 14:13:49.856447 dbus-daemon[1803]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 13 14:13:49.856733 systemd[1]: Started systemd-hostnamed.service. Dec 13 14:13:49.860388 dbus-daemon[1803]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1855 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 13 14:13:49.865552 systemd[1]: Starting polkit.service... Dec 13 14:13:49.905419 polkitd[1926]: Started polkitd version 121 Dec 13 14:13:49.943675 polkitd[1926]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 14:13:49.943806 polkitd[1926]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 13 14:13:49.961729 polkitd[1926]: Finished loading, compiling and executing 2 rules Dec 13 14:13:49.968341 dbus-daemon[1803]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 14:13:49.968752 systemd[1]: Started polkit.service. Dec 13 14:13:49.971643 polkitd[1926]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 13 14:13:50.015580 systemd-hostnamed[1855]: Hostname set to (transient) Dec 13 14:13:50.015753 systemd-resolved[1771]: System hostname changed to 'ip-172-31-24-251'. Dec 13 14:13:50.062240 coreos-metadata[1802]: Dec 13 14:13:50.060 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 13 14:13:50.063031 coreos-metadata[1802]: Dec 13 14:13:50.062 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Dec 13 14:13:50.064466 coreos-metadata[1802]: Dec 13 14:13:50.063 INFO Fetch successful Dec 13 14:13:50.064466 coreos-metadata[1802]: Dec 13 14:13:50.063 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 13 14:13:50.064718 coreos-metadata[1802]: Dec 13 14:13:50.064 INFO Fetch successful Dec 13 14:13:50.069074 unknown[1802]: wrote ssh authorized keys file for user: core Dec 13 14:13:50.119049 update-ssh-keys[1957]: Updated "/home/core/.ssh/authorized_keys" Dec 13 14:13:50.120386 systemd[1]: Finished coreos-metadata-sshkeys@core.service. 
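
coreos-metadata fetched the SSH public key over the EC2 instance metadata service using the paths logged above. A minimal sketch of the same fetch, using only the endpoint and path shown in the log; it is only meaningful from inside an instance, since 169.254.169.254 is link-local. The transient hostname ip-172-31-24-251 set a few entries earlier is likewise derived mechanically from the DHCP address:

    import urllib.request

    BASE = "http://169.254.169.254/2019-10-01/meta-data"  # path from the log
    url = f"{BASE}/public-keys/0/openssh-key"
    with urllib.request.urlopen(url, timeout=2) as resp:  # EC2-only endpoint
        print(resp.read().decode())

    print("ip-" + "172.31.24.251".replace(".", "-"))      # ip-172-31-24-251
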
Dec 13 14:13:50.319271 amazon-ssm-agent[1821]: 2024-12-13 14:13:50 INFO Create new startup processor
Dec 13 14:13:50.329642 amazon-ssm-agent[1821]: 2024-12-13 14:13:50 INFO [LongRunningPluginsManager] registered plugins: {}
Dec 13 14:13:50.331718 amazon-ssm-agent[1821]: 2024-12-13 14:13:50 INFO Initializing bookkeeping folders
Dec 13 14:13:50.331922 amazon-ssm-agent[1821]: 2024-12-13 14:13:50 INFO removing the completed state files
Dec 13 14:13:50.332082 amazon-ssm-agent[1821]: 2024-12-13 14:13:50 INFO Initializing bookkeeping folders for long running plugins
Dec 13 14:13:50.332205 amazon-ssm-agent[1821]: 2024-12-13 14:13:50 INFO Initializing replies folder for MDS reply requests that couldn't reach the service
Dec 13 14:13:50.332326 amazon-ssm-agent[1821]: 2024-12-13 14:13:50 INFO Initializing healthcheck folders for long running plugins
Dec 13 14:13:50.332448 amazon-ssm-agent[1821]: 2024-12-13 14:13:50 INFO Initializing locations for inventory plugin
Dec 13 14:13:50.332664 amazon-ssm-agent[1821]: 2024-12-13 14:13:50 INFO Initializing default location for custom inventory
Dec 13 14:13:50.332829 amazon-ssm-agent[1821]: 2024-12-13 14:13:50 INFO Initializing default location for file inventory
Dec 13 14:13:50.333011 amazon-ssm-agent[1821]: 2024-12-13 14:13:50 INFO Initializing default location for role inventory
Dec 13 14:13:50.334269 amazon-ssm-agent[1821]: 2024-12-13 14:13:50 INFO Init the cloudwatchlogs publisher
Dec 13 14:13:50.334511 amazon-ssm-agent[1821]: 2024-12-13 14:13:50 INFO [instanceID=i-0944507e6722dfa7e] Successfully loaded platform independent plugin aws:refreshAssociation
Dec 13 14:13:50.334708 amazon-ssm-agent[1821]: 2024-12-13 14:13:50 INFO [instanceID=i-0944507e6722dfa7e] Successfully loaded platform independent plugin aws:runDocument
Dec 13 14:13:50.334835 amazon-ssm-agent[1821]: 2024-12-13 14:13:50 INFO [instanceID=i-0944507e6722dfa7e] Successfully loaded platform independent plugin aws:updateSsmAgent
Dec 13 14:13:50.334993 amazon-ssm-agent[1821]: 2024-12-13 14:13:50 INFO [instanceID=i-0944507e6722dfa7e] Successfully loaded platform independent plugin aws:runPowerShellScript
Dec 13 14:13:50.335134 amazon-ssm-agent[1821]: 2024-12-13 14:13:50 INFO [instanceID=i-0944507e6722dfa7e] Successfully loaded platform independent plugin aws:configureDocker
Dec 13 14:13:50.337468 amazon-ssm-agent[1821]: 2024-12-13 14:13:50 INFO [instanceID=i-0944507e6722dfa7e] Successfully loaded platform independent plugin aws:runDockerAction
Dec 13 14:13:50.339469 amazon-ssm-agent[1821]: 2024-12-13 14:13:50 INFO [instanceID=i-0944507e6722dfa7e] Successfully loaded platform independent plugin aws:configurePackage
Dec 13 14:13:50.339704 amazon-ssm-agent[1821]: 2024-12-13 14:13:50 INFO [instanceID=i-0944507e6722dfa7e] Successfully loaded platform independent plugin aws:downloadContent
Dec 13 14:13:50.339944 amazon-ssm-agent[1821]: 2024-12-13 14:13:50 INFO [instanceID=i-0944507e6722dfa7e] Successfully loaded platform independent plugin aws:softwareInventory
Dec 13 14:13:50.340073 amazon-ssm-agent[1821]: 2024-12-13 14:13:50 INFO [instanceID=i-0944507e6722dfa7e] Successfully loaded platform dependent plugin aws:runShellScript
Dec 13 14:13:50.342458 amazon-ssm-agent[1821]: 2024-12-13 14:13:50 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0
Dec 13 14:13:50.344320 amazon-ssm-agent[1821]: 2024-12-13 14:13:50 INFO OS: linux, Arch: arm64
Dec 13 14:13:50.350143 amazon-ssm-agent[1821]: datastore file /var/lib/amazon/ssm/i-0944507e6722dfa7e/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute
Dec 13 14:13:50.424167 amazon-ssm-agent[1821]: 2024-12-13 14:13:50 INFO [MessageGatewayService] Starting session document processing engine...
Dec 13 14:13:50.519725 amazon-ssm-agent[1821]: 2024-12-13 14:13:50 INFO [MessageGatewayService] [EngineProcessor] Starting
Dec 13 14:13:50.614171 amazon-ssm-agent[1821]: 2024-12-13 14:13:50 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module.
Dec 13 14:13:50.708676 amazon-ssm-agent[1821]: 2024-12-13 14:13:50 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-0944507e6722dfa7e, requestId: 92bd36ac-6d31-4cfd-92e1-ff8476e3da13
Dec 13 14:13:50.803401 amazon-ssm-agent[1821]: 2024-12-13 14:13:50 INFO [MessagingDeliveryService] Starting document processing engine...
Dec 13 14:13:50.898413 amazon-ssm-agent[1821]: 2024-12-13 14:13:50 INFO [MessagingDeliveryService] [EngineProcessor] Starting
Dec 13 14:13:50.947829 tar[1816]: linux-arm64/LICENSE
Dec 13 14:13:50.948439 tar[1816]: linux-arm64/README.md
Dec 13 14:13:50.956415 systemd[1]: Finished prepare-helm.service.
Dec 13 14:13:50.993794 amazon-ssm-agent[1821]: 2024-12-13 14:13:50 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing
Dec 13 14:13:51.030980 locksmithd[1880]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 14:13:51.089158 amazon-ssm-agent[1821]: 2024-12-13 14:13:50 INFO [MessagingDeliveryService] Starting message polling
Dec 13 14:13:51.184748 amazon-ssm-agent[1821]: 2024-12-13 14:13:50 INFO [MessagingDeliveryService] Starting send replies to MDS
Dec 13 14:13:51.280436 amazon-ssm-agent[1821]: 2024-12-13 14:13:50 INFO [instanceID=i-0944507e6722dfa7e] Starting association polling
Dec 13 14:13:51.360317 sshd_keygen[1835]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 14:13:51.376345 amazon-ssm-agent[1821]: 2024-12-13 14:13:50 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting
Dec 13 14:13:51.399906 systemd[1]: Finished sshd-keygen.service.
Dec 13 14:13:51.404334 systemd[1]: Starting issuegen.service...
Dec 13 14:13:51.414802 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 14:13:51.415166 systemd[1]: Finished issuegen.service.
Dec 13 14:13:51.419655 systemd[1]: Starting systemd-user-sessions.service...
Dec 13 14:13:51.436430 systemd[1]: Finished systemd-user-sessions.service.
Dec 13 14:13:51.441596 systemd[1]: Started getty@tty1.service.
Dec 13 14:13:51.446132 systemd[1]: Started serial-getty@ttyS0.service.
Dec 13 14:13:51.448504 systemd[1]: Reached target getty.target.
Dec 13 14:13:51.472541 amazon-ssm-agent[1821]: 2024-12-13 14:13:50 INFO [MessagingDeliveryService] [Association] Launching response handler
Dec 13 14:13:51.568914 amazon-ssm-agent[1821]: 2024-12-13 14:13:50 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing
Dec 13 14:13:51.665499 amazon-ssm-agent[1821]: 2024-12-13 14:13:50 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service
Dec 13 14:13:51.762247 amazon-ssm-agent[1821]: 2024-12-13 14:13:50 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized
Dec 13 14:13:51.859124 amazon-ssm-agent[1821]: 2024-12-13 14:13:50 INFO [MessageGatewayService] listening reply.
Dec 13 14:13:51.956213 amazon-ssm-agent[1821]: 2024-12-13 14:13:50 INFO [HealthCheck] HealthCheck reporting agent health.
Dec 13 14:13:52.053645 amazon-ssm-agent[1821]: 2024-12-13 14:13:50 INFO [OfflineService] Starting document processing engine...
Dec 13 14:13:52.151054 amazon-ssm-agent[1821]: 2024-12-13 14:13:50 INFO [OfflineService] [EngineProcessor] Starting
Dec 13 14:13:52.248825 amazon-ssm-agent[1821]: 2024-12-13 14:13:50 INFO [OfflineService] [EngineProcessor] Initial processing
Dec 13 14:13:52.346778 amazon-ssm-agent[1821]: 2024-12-13 14:13:50 INFO [OfflineService] Starting message polling
Dec 13 14:13:52.444910 amazon-ssm-agent[1821]: 2024-12-13 14:13:50 INFO [OfflineService] Starting send replies to MDS
Dec 13 14:13:52.543196 amazon-ssm-agent[1821]: 2024-12-13 14:13:50 INFO [LongRunningPluginsManager] starting long running plugin manager
Dec 13 14:13:52.641777 amazon-ssm-agent[1821]: 2024-12-13 14:13:50 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute
Dec 13 14:13:52.696777 systemd[1]: Started kubelet.service.
Dec 13 14:13:52.699334 systemd[1]: Reached target multi-user.target.
Dec 13 14:13:52.703851 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Dec 13 14:13:52.719212 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 13 14:13:52.719673 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Dec 13 14:13:52.722029 systemd[1]: Startup finished in 1.116s (kernel) + 8.771s (initrd) + 12.359s (userspace) = 22.248s.
Dec 13 14:13:52.740342 amazon-ssm-agent[1821]: 2024-12-13 14:13:50 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck
Dec 13 14:13:52.839234 amazon-ssm-agent[1821]: 2024-12-13 14:13:50 INFO [StartupProcessor] Executing startup processor tasks
Dec 13 14:13:52.938438 amazon-ssm-agent[1821]: 2024-12-13 14:13:50 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running
Dec 13 14:13:53.037705 amazon-ssm-agent[1821]: 2024-12-13 14:13:50 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk
Dec 13 14:13:53.137152 amazon-ssm-agent[1821]: 2024-12-13 14:13:50 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.6
Dec 13 14:13:53.236910 amazon-ssm-agent[1821]: 2024-12-13 14:13:50 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0944507e6722dfa7e?role=subscribe&stream=input
Dec 13 14:13:53.336728 amazon-ssm-agent[1821]: 2024-12-13 14:13:50 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0944507e6722dfa7e?role=subscribe&stream=input
Dec 13 14:13:53.436773 amazon-ssm-agent[1821]: 2024-12-13 14:13:50 INFO [MessageGatewayService] Starting receiving message from control channel
Dec 13 14:13:53.537087 amazon-ssm-agent[1821]: 2024-12-13 14:13:50 INFO [MessageGatewayService] [EngineProcessor] Initial processing
Dec 13 14:13:54.653166 kubelet[2021]: E1213 14:13:54.653030 2021 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:13:54.658430 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:13:54.658770 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:13:54.659204 systemd[1]: kubelet.service: Consumed 1.503s CPU time.
Dec 13 14:13:57.272596 systemd[1]: Created slice system-sshd.slice.
Dec 13 14:13:57.275096 systemd[1]: Started sshd@0-172.31.24.251:22-139.178.89.65:38162.service.
Dec 13 14:13:57.466100 sshd[2030]: Accepted publickey for core from 139.178.89.65 port 38162 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q
Dec 13 14:13:57.471910 sshd[2030]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:13:57.492829 systemd[1]: Created slice user-500.slice.
Dec 13 14:13:57.495342 systemd[1]: Starting user-runtime-dir@500.service...
Dec 13 14:13:57.506686 systemd-logind[1810]: New session 1 of user core.
Dec 13 14:13:57.516905 systemd[1]: Finished user-runtime-dir@500.service.
Dec 13 14:13:57.520450 systemd[1]: Starting user@500.service...
Dec 13 14:13:57.528136 (systemd)[2033]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:13:57.722699 systemd[2033]: Queued start job for default target default.target.
Dec 13 14:13:57.723940 systemd[2033]: Reached target paths.target.
Dec 13 14:13:57.723999 systemd[2033]: Reached target sockets.target.
Dec 13 14:13:57.724033 systemd[2033]: Reached target timers.target.
Dec 13 14:13:57.724063 systemd[2033]: Reached target basic.target.
Dec 13 14:13:57.724171 systemd[2033]: Reached target default.target.
Dec 13 14:13:57.724244 systemd[2033]: Startup finished in 183ms.
Dec 13 14:13:57.725464 systemd[1]: Started user@500.service.
Dec 13 14:13:57.729654 systemd[1]: Started session-1.scope.
Dec 13 14:13:57.877837 systemd[1]: Started sshd@1-172.31.24.251:22-139.178.89.65:38166.service.
Dec 13 14:13:58.046330 sshd[2042]: Accepted publickey for core from 139.178.89.65 port 38166 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q
Dec 13 14:13:58.049827 sshd[2042]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:13:58.059963 systemd-logind[1810]: New session 2 of user core.
Dec 13 14:13:58.060085 systemd[1]: Started session-2.scope.
Dec 13 14:13:58.192929 sshd[2042]: pam_unix(sshd:session): session closed for user core
Dec 13 14:13:58.198368 systemd-logind[1810]: Session 2 logged out. Waiting for processes to exit.
Dec 13 14:13:58.198847 systemd[1]: sshd@1-172.31.24.251:22-139.178.89.65:38166.service: Deactivated successfully.
Dec 13 14:13:58.200288 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 14:13:58.202058 systemd-logind[1810]: Removed session 2.
Dec 13 14:13:58.222754 systemd[1]: Started sshd@2-172.31.24.251:22-139.178.89.65:40056.service.
Dec 13 14:13:58.400773 sshd[2048]: Accepted publickey for core from 139.178.89.65 port 40056 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q
Dec 13 14:13:58.403630 sshd[2048]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:13:58.414299 systemd[1]: Started session-3.scope.
Dec 13 14:13:58.415682 systemd-logind[1810]: New session 3 of user core.
Dec 13 14:13:58.541755 sshd[2048]: pam_unix(sshd:session): session closed for user core
Dec 13 14:13:58.548397 systemd[1]: sshd@2-172.31.24.251:22-139.178.89.65:40056.service: Deactivated successfully.
Dec 13 14:13:58.549666 systemd-logind[1810]: Session 3 logged out. Waiting for processes to exit.
Dec 13 14:13:58.549781 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 14:13:58.552204 systemd-logind[1810]: Removed session 3.
Dec 13 14:13:58.567928 systemd[1]: Started sshd@3-172.31.24.251:22-139.178.89.65:40070.service.
Dec 13 14:13:58.737045 sshd[2054]: Accepted publickey for core from 139.178.89.65 port 40070 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q
Dec 13 14:13:58.740174 sshd[2054]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:13:58.748295 systemd-logind[1810]: New session 4 of user core.
Dec 13 14:13:58.749231 systemd[1]: Started session-4.scope.
Dec 13 14:13:58.879831 sshd[2054]: pam_unix(sshd:session): session closed for user core
Dec 13 14:13:58.884452 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 14:13:58.885686 systemd[1]: sshd@3-172.31.24.251:22-139.178.89.65:40070.service: Deactivated successfully.
Dec 13 14:13:58.887242 systemd-logind[1810]: Session 4 logged out. Waiting for processes to exit.
Dec 13 14:13:58.889411 systemd-logind[1810]: Removed session 4.
Dec 13 14:13:58.909903 systemd[1]: Started sshd@4-172.31.24.251:22-139.178.89.65:40080.service.
Dec 13 14:13:59.086356 sshd[2060]: Accepted publickey for core from 139.178.89.65 port 40080 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q
Dec 13 14:13:59.089899 sshd[2060]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:13:59.098689 systemd-logind[1810]: New session 5 of user core.
Dec 13 14:13:59.099760 systemd[1]: Started session-5.scope.
Dec 13 14:13:59.234634 sudo[2063]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 14:13:59.235298 sudo[2063]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Dec 13 14:13:59.286756 systemd[1]: Starting docker.service...
Dec 13 14:13:59.365144 env[2073]: time="2024-12-13T14:13:59.365077590Z" level=info msg="Starting up"
Dec 13 14:13:59.368045 env[2073]: time="2024-12-13T14:13:59.367962762Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Dec 13 14:13:59.368045 env[2073]: time="2024-12-13T14:13:59.368019328Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Dec 13 14:13:59.368248 env[2073]: time="2024-12-13T14:13:59.368064057Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Dec 13 14:13:59.368248 env[2073]: time="2024-12-13T14:13:59.368089738Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Dec 13 14:13:59.371627 env[2073]: time="2024-12-13T14:13:59.371519380Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Dec 13 14:13:59.371627 env[2073]: time="2024-12-13T14:13:59.371594357Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Dec 13 14:13:59.371815 env[2073]: time="2024-12-13T14:13:59.371629772Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Dec 13 14:13:59.371815 env[2073]: time="2024-12-13T14:13:59.371652064Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Dec 13 14:13:59.384946 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1849316788-merged.mount: Deactivated successfully.
Dec 13 14:13:59.432012 env[2073]: time="2024-12-13T14:13:59.431943840Z" level=info msg="Loading containers: start."
Dec 13 14:13:59.646626 kernel: Initializing XFRM netlink socket
Dec 13 14:13:59.692382 env[2073]: time="2024-12-13T14:13:59.692304163Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Dec 13 14:13:59.696287 (udev-worker)[2084]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 14:13:59.792372 systemd-networkd[1540]: docker0: Link UP
Dec 13 14:13:59.822205 env[2073]: time="2024-12-13T14:13:59.822158606Z" level=info msg="Loading containers: done."
Dec 13 14:13:59.845017 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1256556740-merged.mount: Deactivated successfully.
Dec 13 14:13:59.857917 env[2073]: time="2024-12-13T14:13:59.857833182Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 13 14:13:59.858640 env[2073]: time="2024-12-13T14:13:59.858587619Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Dec 13 14:13:59.859108 env[2073]: time="2024-12-13T14:13:59.859063230Z" level=info msg="Daemon has completed initialization"
Dec 13 14:13:59.893890 systemd[1]: Started docker.service.
Dec 13 14:13:59.910731 env[2073]: time="2024-12-13T14:13:59.910586332Z" level=info msg="API listen on /run/docker.sock"
Dec 13 14:14:00.441592 amazon-ssm-agent[1821]: 2024-12-13 14:14:00 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds.
Dec 13 14:14:01.279146 env[1825]: time="2024-12-13T14:14:01.279047663Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\""
Dec 13 14:14:02.011604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3452001189.mount: Deactivated successfully.
Dec 13 14:14:04.393160 env[1825]: time="2024-12-13T14:14:04.393098718Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:04.395572 env[1825]: time="2024-12-13T14:14:04.395493255Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:04.399260 env[1825]: time="2024-12-13T14:14:04.399194529Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:04.403016 env[1825]: time="2024-12-13T14:14:04.402953789Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:04.405074 env[1825]: time="2024-12-13T14:14:04.404989822Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\""
Dec 13 14:14:04.423991 env[1825]: time="2024-12-13T14:14:04.423941334Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\""
Dec 13 14:14:04.898920 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 14:14:04.899248 systemd[1]: Stopped kubelet.service.
Dec 13 14:14:04.899319 systemd[1]: kubelet.service: Consumed 1.503s CPU time.
Dec 13 14:14:04.901860 systemd[1]: Starting kubelet.service...
Dec 13 14:14:05.204652 systemd[1]: Started kubelet.service.
Dec 13 14:14:05.319240 kubelet[2211]: E1213 14:14:05.319121 2211 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:14:05.328479 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:14:05.328841 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:14:07.157949 env[1825]: time="2024-12-13T14:14:07.157867975Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:07.164065 env[1825]: time="2024-12-13T14:14:07.163998971Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:07.168217 env[1825]: time="2024-12-13T14:14:07.168152631Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:07.172313 env[1825]: time="2024-12-13T14:14:07.172259703Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:07.173833 env[1825]: time="2024-12-13T14:14:07.173783144Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\""
Dec 13 14:14:07.191255 env[1825]: time="2024-12-13T14:14:07.191162723Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\""
Dec 13 14:14:08.928926 env[1825]: time="2024-12-13T14:14:08.928868056Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:08.933442 env[1825]: time="2024-12-13T14:14:08.933379574Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:08.938454 env[1825]: time="2024-12-13T14:14:08.938387380Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:08.942267 env[1825]: time="2024-12-13T14:14:08.942209204Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:08.942927 env[1825]: time="2024-12-13T14:14:08.942882360Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\""
Dec 13 14:14:08.960627 env[1825]: time="2024-12-13T14:14:08.960550821Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\""
Dec 13 14:14:10.394870 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1506604372.mount: Deactivated successfully.
Dec 13 14:14:11.225257 env[1825]: time="2024-12-13T14:14:11.225191535Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:11.229606 env[1825]: time="2024-12-13T14:14:11.229536484Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:11.232989 env[1825]: time="2024-12-13T14:14:11.232925056Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:11.236290 env[1825]: time="2024-12-13T14:14:11.236244319Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:11.237194 env[1825]: time="2024-12-13T14:14:11.237151943Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\""
Dec 13 14:14:11.254080 env[1825]: time="2024-12-13T14:14:11.254031680Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Dec 13 14:14:11.858639 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3841274693.mount: Deactivated successfully.
Dec 13 14:14:13.448903 env[1825]: time="2024-12-13T14:14:13.448837169Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:13.454142 env[1825]: time="2024-12-13T14:14:13.454083352Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:13.460227 env[1825]: time="2024-12-13T14:14:13.460149326Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:13.465930 env[1825]: time="2024-12-13T14:14:13.465849639Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:13.469020 env[1825]: time="2024-12-13T14:14:13.468938855Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Dec 13 14:14:13.487732 env[1825]: time="2024-12-13T14:14:13.487679124Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Dec 13 14:14:14.008134 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1362210484.mount: Deactivated successfully.
Dec 13 14:14:14.022464 env[1825]: time="2024-12-13T14:14:14.022389500Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:14.026881 env[1825]: time="2024-12-13T14:14:14.026821648Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:14.030449 env[1825]: time="2024-12-13T14:14:14.030401772Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:14.033663 env[1825]: time="2024-12-13T14:14:14.033618181Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:14.034757 env[1825]: time="2024-12-13T14:14:14.034710949Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Dec 13 14:14:14.051384 env[1825]: time="2024-12-13T14:14:14.051292072Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Dec 13 14:14:14.617882 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3032653637.mount: Deactivated successfully.
Dec 13 14:14:15.398937 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 13 14:14:15.399285 systemd[1]: Stopped kubelet.service.
Dec 13 14:14:15.401875 systemd[1]: Starting kubelet.service...
Dec 13 14:14:15.694138 systemd[1]: Started kubelet.service.
Dec 13 14:14:15.801294 kubelet[2248]: E1213 14:14:15.801226 2248 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:14:15.806048 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:14:15.806374 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:14:17.941922 env[1825]: time="2024-12-13T14:14:17.941861751Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:17.946819 env[1825]: time="2024-12-13T14:14:17.946770075Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:17.952196 env[1825]: time="2024-12-13T14:14:17.952129694Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:17.957922 env[1825]: time="2024-12-13T14:14:17.957859309Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:17.960530 env[1825]: time="2024-12-13T14:14:17.960472423Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\""
Dec 13 14:14:20.029111 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 13 14:14:24.874373 systemd[1]: Stopped kubelet.service.
Dec 13 14:14:24.879506 systemd[1]: Starting kubelet.service...
Dec 13 14:14:24.919944 systemd[1]: Reloading.
Dec 13 14:14:25.075458 /usr/lib/systemd/system-generators/torcx-generator[2342]: time="2024-12-13T14:14:25Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 14:14:25.079750 /usr/lib/systemd/system-generators/torcx-generator[2342]: time="2024-12-13T14:14:25Z" level=info msg="torcx already run"
Dec 13 14:14:25.294059 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 14:14:25.294097 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 14:14:25.332624 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:14:25.558739 systemd[1]: Started kubelet.service.
Dec 13 14:14:25.566733 systemd[1]: Stopping kubelet.service...
Dec 13 14:14:25.568338 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 14:14:25.568965 systemd[1]: Stopped kubelet.service.
Dec 13 14:14:25.572717 systemd[1]: Starting kubelet.service...
Dec 13 14:14:25.853352 systemd[1]: Started kubelet.service.
Dec 13 14:14:25.951610 kubelet[2402]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 14:14:25.951610 kubelet[2402]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 14:14:25.951610 kubelet[2402]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 14:14:25.952195 kubelet[2402]: I1213 14:14:25.951719 2402 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 14:14:26.474036 kubelet[2402]: I1213 14:14:26.473963 2402 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Dec 13 14:14:26.474036 kubelet[2402]: I1213 14:14:26.474023 2402 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 14:14:26.474370 kubelet[2402]: I1213 14:14:26.474353 2402 server.go:919] "Client rotation is on, will bootstrap in background"
Dec 13 14:14:26.527322 kubelet[2402]: E1213 14:14:26.527282 2402 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.24.251:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.24.251:6443: connect: connection refused
Dec 13 14:14:26.527617 kubelet[2402]: I1213 14:14:26.527461 2402 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 14:14:26.546306 kubelet[2402]: I1213 14:14:26.546259 2402 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 14:14:26.547017 kubelet[2402]: I1213 14:14:26.546994 2402 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 14:14:26.547443 kubelet[2402]: I1213 14:14:26.547412 2402 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 14:14:26.547705 kubelet[2402]: I1213 14:14:26.547682 2402 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 14:14:26.547825 kubelet[2402]: I1213 14:14:26.547804 2402 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 14:14:26.548116 kubelet[2402]: I1213 14:14:26.548096 2402 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:14:26.553226 kubelet[2402]: I1213 14:14:26.553194 2402 kubelet.go:396] "Attempting to sync node with API server"
Dec 13 14:14:26.554002 kubelet[2402]: I1213 14:14:26.553975 2402 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 14:14:26.554224 kubelet[2402]: I1213 14:14:26.554190 2402 kubelet.go:312] "Adding apiserver pod source"
Dec 13 14:14:26.559153 kubelet[2402]: I1213 14:14:26.559114 2402 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 14:14:26.559358 kubelet[2402]: W1213 14:14:26.554082 2402 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.24.251:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-251&limit=500&resourceVersion=0": dial tcp 172.31.24.251:6443: connect: connection refused
Dec 13 14:14:26.559511 kubelet[2402]: E1213 14:14:26.559489 2402 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.24.251:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-251&limit=500&resourceVersion=0": dial tcp 172.31.24.251:6443: connect: connection refused
Dec 13 14:14:26.561044 kubelet[2402]: W1213 14:14:26.560721 2402 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.24.251:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.24.251:6443: connect: connection refused
Dec 13 14:14:26.561044 kubelet[2402]: E1213 14:14:26.560865 2402 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.24.251:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.24.251:6443: connect: connection refused
Dec 13 14:14:26.561279 kubelet[2402]: I1213 14:14:26.561145 2402 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Dec 13 14:14:26.561890 kubelet[2402]: I1213 14:14:26.561848 2402 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 14:14:26.570036 kubelet[2402]: W1213 14:14:26.569985 2402 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 14:14:26.571155 kubelet[2402]: I1213 14:14:26.571099 2402 server.go:1256] "Started kubelet"
Dec 13 14:14:26.572997 kubelet[2402]: I1213 14:14:26.572962 2402 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 14:14:26.574581 kubelet[2402]: I1213 14:14:26.574522 2402 server.go:461] "Adding debug handlers to kubelet server"
Dec 13 14:14:26.576826 kubelet[2402]: I1213 14:14:26.576757 2402 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 14:14:26.577231 kubelet[2402]: I1213 14:14:26.577189 2402 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 14:14:26.579700 kubelet[2402]: E1213 14:14:26.579635 2402 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.24.251:6443/api/v1/namespaces/default/events\": dial tcp 172.31.24.251:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-24-251.1810c218653bcce4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-24-251,UID:ip-172-31-24-251,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-24-251,},FirstTimestamp:2024-12-13 14:14:26.5710625 +0000 UTC m=+0.709027098,LastTimestamp:2024-12-13 14:14:26.5710625 +0000 UTC m=+0.709027098,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-24-251,}"
Dec 13 14:14:26.584428 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Dec 13 14:14:26.585638 kubelet[2402]: I1213 14:14:26.585008 2402 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 14:14:26.586036 kubelet[2402]: I1213 14:14:26.586008 2402 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 14:14:26.589705 kubelet[2402]: I1213 14:14:26.589675 2402 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Dec 13 14:14:26.591104 kubelet[2402]: I1213 14:14:26.591071 2402 reconciler_new.go:29] "Reconciler: start to sync state"
Dec 13 14:14:26.591861 kubelet[2402]: W1213 14:14:26.591807 2402 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.24.251:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.251:6443: connect: connection refused
Dec 13 14:14:26.592039 kubelet[2402]: E1213 14:14:26.592017 2402 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.24.251:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.251:6443: connect: connection refused
Dec 13 14:14:26.592892 kubelet[2402]: E1213 14:14:26.592821 2402 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-24-251\" not found"
Dec 13 14:14:26.593693 kubelet[2402]: E1213 14:14:26.593658 2402 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.251:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-251?timeout=10s\": dial tcp 172.31.24.251:6443: connect: connection refused" interval="200ms"
Dec 13 14:14:26.594169 kubelet[2402]: E1213 14:14:26.594139 2402 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 14:14:26.595224 kubelet[2402]: I1213 14:14:26.595191 2402 factory.go:221] Registration of the systemd container factory successfully
Dec 13 14:14:26.595540 kubelet[2402]: I1213 14:14:26.595508 2402 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 14:14:26.597728 kubelet[2402]: I1213 14:14:26.597671 2402 factory.go:221] Registration of the containerd container factory successfully
Dec 13 14:14:26.615867 kubelet[2402]: I1213 14:14:26.615827 2402 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 14:14:26.615867 kubelet[2402]: I1213 14:14:26.615863 2402 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 14:14:26.616102 kubelet[2402]: I1213 14:14:26.615897 2402 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:14:26.621178 kubelet[2402]: I1213 14:14:26.621127 2402 policy_none.go:49] "None policy: Start"
Dec 13 14:14:26.622426 kubelet[2402]: I1213 14:14:26.622387 2402 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 14:14:26.622619 kubelet[2402]: I1213 14:14:26.622460 2402 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 14:14:26.641387 systemd[1]: Created slice kubepods.slice.
Dec 13 14:14:26.650833 kubelet[2402]: I1213 14:14:26.650757 2402 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 14:14:26.654865 kubelet[2402]: I1213 14:14:26.654808 2402 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 14:14:26.654865 kubelet[2402]: I1213 14:14:26.654856 2402 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 14:14:26.655082 kubelet[2402]: I1213 14:14:26.654888 2402 kubelet.go:2329] "Starting kubelet main sync loop"
Dec 13 14:14:26.655082 kubelet[2402]: E1213 14:14:26.654981 2402 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 14:14:26.661058 systemd[1]: Created slice kubepods-burstable.slice.
Dec 13 14:14:26.668843 kubelet[2402]: W1213 14:14:26.668778 2402 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.24.251:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.251:6443: connect: connection refused
Dec 13 14:14:26.668843 kubelet[2402]: E1213 14:14:26.668849 2402 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.24.251:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.251:6443: connect: connection refused
Dec 13 14:14:26.675586 systemd[1]: Created slice kubepods-besteffort.slice.
Dec 13 14:14:26.689146 kubelet[2402]: I1213 14:14:26.689111 2402 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 14:14:26.692540 kubelet[2402]: I1213 14:14:26.692499 2402 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 14:14:26.697199 kubelet[2402]: E1213 14:14:26.697151 2402 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-24-251\" not found"
Dec 13 14:14:26.699760 kubelet[2402]: I1213 14:14:26.699537 2402 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-24-251"
Dec 13 14:14:26.700255 kubelet[2402]: E1213 14:14:26.700224 2402 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.24.251:6443/api/v1/nodes\": dial tcp 172.31.24.251:6443: connect: connection refused" node="ip-172-31-24-251"
Dec 13 14:14:26.758098 kubelet[2402]: I1213 14:14:26.755384 2402 topology_manager.go:215] "Topology Admit Handler" podUID="2a5d246983e84cb4912ff01665859dd4" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-24-251"
Dec 13 14:14:26.760124 kubelet[2402]: I1213 14:14:26.760087 2402 topology_manager.go:215] "Topology Admit Handler" podUID="ea4eb78860a9e111a0e2406347e46e69" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-24-251"
Dec 13 14:14:26.763267 kubelet[2402]: I1213 14:14:26.763229 2402 topology_manager.go:215] "Topology Admit Handler" podUID="98d482283c55d522584cd2f2d4a14cbf" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-24-251"
Dec 13 14:14:26.776197 systemd[1]: Created slice kubepods-burstable-pod2a5d246983e84cb4912ff01665859dd4.slice.
Dec 13 14:14:26.782318 systemd[1]: Created slice kubepods-burstable-podea4eb78860a9e111a0e2406347e46e69.slice.
Dec 13 14:14:26.794374 systemd[1]: Created slice kubepods-burstable-pod98d482283c55d522584cd2f2d4a14cbf.slice.
Dec 13 14:14:26.795199 kubelet[2402]: E1213 14:14:26.795148 2402 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.251:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-251?timeout=10s\": dial tcp 172.31.24.251:6443: connect: connection refused" interval="400ms"
Dec 13 14:14:26.798205 kubelet[2402]: I1213 14:14:26.798151 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ea4eb78860a9e111a0e2406347e46e69-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-251\" (UID: \"ea4eb78860a9e111a0e2406347e46e69\") " pod="kube-system/kube-controller-manager-ip-172-31-24-251"
Dec 13 14:14:26.798376 kubelet[2402]: I1213 14:14:26.798225 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ea4eb78860a9e111a0e2406347e46e69-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-251\" (UID: \"ea4eb78860a9e111a0e2406347e46e69\") " pod="kube-system/kube-controller-manager-ip-172-31-24-251"
Dec 13 14:14:26.798376 kubelet[2402]: I1213 14:14:26.798274 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ea4eb78860a9e111a0e2406347e46e69-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-251\" (UID: \"ea4eb78860a9e111a0e2406347e46e69\") " pod="kube-system/kube-controller-manager-ip-172-31-24-251"
Dec 13 14:14:26.798376 kubelet[2402]: I1213 14:14:26.798318 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2a5d246983e84cb4912ff01665859dd4-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-251\" (UID: \"2a5d246983e84cb4912ff01665859dd4\") " pod="kube-system/kube-apiserver-ip-172-31-24-251"
Dec 13 14:14:26.798376 kubelet[2402]: I1213 14:14:26.798364 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2a5d246983e84cb4912ff01665859dd4-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-251\" (UID: \"2a5d246983e84cb4912ff01665859dd4\") " pod="kube-system/kube-apiserver-ip-172-31-24-251"
Dec 13 14:14:26.798669 kubelet[2402]: I1213 14:14:26.798408 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea4eb78860a9e111a0e2406347e46e69-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-251\" (UID: \"ea4eb78860a9e111a0e2406347e46e69\") " pod="kube-system/kube-controller-manager-ip-172-31-24-251"
Dec 13 14:14:26.798669 kubelet[2402]: I1213 14:14:26.798456 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ea4eb78860a9e111a0e2406347e46e69-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-251\" (UID: \"ea4eb78860a9e111a0e2406347e46e69\") " pod="kube-system/kube-controller-manager-ip-172-31-24-251"
Dec 13 14:14:26.798669 kubelet[2402]: I1213 14:14:26.798501 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/98d482283c55d522584cd2f2d4a14cbf-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-251\" (UID: \"98d482283c55d522584cd2f2d4a14cbf\") " pod="kube-system/kube-scheduler-ip-172-31-24-251"
Dec 13 14:14:26.798669 kubelet[2402]: I1213 14:14:26.798641 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2a5d246983e84cb4912ff01665859dd4-ca-certs\") pod \"kube-apiserver-ip-172-31-24-251\" (UID: \"2a5d246983e84cb4912ff01665859dd4\") " pod="kube-system/kube-apiserver-ip-172-31-24-251"
Dec 13 14:14:26.902345 kubelet[2402]: I1213 14:14:26.902303 2402 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-24-251"
Dec 13 14:14:26.903059 kubelet[2402]: E1213 14:14:26.903027 2402 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.24.251:6443/api/v1/nodes\": dial tcp 172.31.24.251:6443: connect: connection refused" node="ip-172-31-24-251"
Dec 13 14:14:27.091993 env[1825]: time="2024-12-13T14:14:27.090921695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-251,Uid:2a5d246983e84cb4912ff01665859dd4,Namespace:kube-system,Attempt:0,}"
Dec 13 14:14:27.093270 env[1825]: time="2024-12-13T14:14:27.093175098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-251,Uid:ea4eb78860a9e111a0e2406347e46e69,Namespace:kube-system,Attempt:0,}"
Dec 13 14:14:27.099710 env[1825]: time="2024-12-13T14:14:27.099591136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-251,Uid:98d482283c55d522584cd2f2d4a14cbf,Namespace:kube-system,Attempt:0,}"
Dec 13 14:14:27.196362 kubelet[2402]: E1213 14:14:27.196306 2402 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.251:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-251?timeout=10s\": dial tcp 172.31.24.251:6443: connect: connection refused" interval="800ms"
Dec 13 14:14:27.305047 kubelet[2402]: I1213 14:14:27.304988 2402 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-24-251"
Dec 13 14:14:27.305492 kubelet[2402]: E1213 14:14:27.305462 2402 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.24.251:6443/api/v1/nodes\": dial tcp 172.31.24.251:6443: connect: connection refused" node="ip-172-31-24-251"
Dec 13 14:14:27.492911 kubelet[2402]: W1213 14:14:27.492786 2402 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.24.251:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-251&limit=500&resourceVersion=0": dial tcp 172.31.24.251:6443: connect: connection refused
Dec 13 14:14:27.492911 kubelet[2402]: E1213 14:14:27.492877 2402 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.24.251:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-251&limit=500&resourceVersion=0": dial tcp 172.31.24.251:6443: connect: connection refused
Dec 13 14:14:27.605699 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1279311850.mount: Deactivated successfully.
Dec 13 14:14:27.617882 env[1825]: time="2024-12-13T14:14:27.617824055Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:27.629390 env[1825]: time="2024-12-13T14:14:27.629337630Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:27.631502 env[1825]: time="2024-12-13T14:14:27.631458929Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:27.634879 env[1825]: time="2024-12-13T14:14:27.634831837Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:27.636973 env[1825]: time="2024-12-13T14:14:27.636917512Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:27.639805 env[1825]: time="2024-12-13T14:14:27.639754613Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:27.644717 env[1825]: time="2024-12-13T14:14:27.644666742Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:27.646211 kubelet[2402]: W1213 14:14:27.646119 2402 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.24.251:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.24.251:6443: connect: connection refused
Dec 13 14:14:27.646359 kubelet[2402]: E1213 14:14:27.646240 2402 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.24.251:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.24.251:6443: connect: connection refused
Dec 13 14:14:27.647786 env[1825]: time="2024-12-13T14:14:27.647724600Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:27.649629 env[1825]: time="2024-12-13T14:14:27.649583012Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:27.651323 env[1825]: time="2024-12-13T14:14:27.651269004Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:27.657476 env[1825]: time="2024-12-13T14:14:27.657414354Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:27.659993 env[1825]: time="2024-12-13T14:14:27.659947411Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:27.709327 env[1825]: time="2024-12-13T14:14:27.709195166Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:14:27.709327 env[1825]: time="2024-12-13T14:14:27.709272776Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:14:27.709634 env[1825]: time="2024-12-13T14:14:27.709300335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:14:27.710442 env[1825]: time="2024-12-13T14:14:27.710330155Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1a7fcab066694a82c1f1166d086575559e1cd4811b55e07f26a93daa7c2e5751 pid=2441 runtime=io.containerd.runc.v2
Dec 13 14:14:27.729819 env[1825]: time="2024-12-13T14:14:27.729652302Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:14:27.729819 env[1825]: time="2024-12-13T14:14:27.729757771Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:14:27.730542 env[1825]: time="2024-12-13T14:14:27.729786722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:14:27.732956 env[1825]: time="2024-12-13T14:14:27.732825111Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/568524d55e3d52d2ad5a93fda2d8eb86b4e56fd12b2836bab20ddee9654ca639 pid=2457 runtime=io.containerd.runc.v2
Dec 13 14:14:27.744759 env[1825]: time="2024-12-13T14:14:27.744493235Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:14:27.744759 env[1825]: time="2024-12-13T14:14:27.744607562Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:14:27.746029 env[1825]: time="2024-12-13T14:14:27.744634725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:14:27.746497 env[1825]: time="2024-12-13T14:14:27.746391113Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/55c22b3b9c153fbf51c2dd7dad7da43de521d3a46e494a920728c4a02bf85690 pid=2483 runtime=io.containerd.runc.v2
Dec 13 14:14:27.764512 systemd[1]: Started cri-containerd-1a7fcab066694a82c1f1166d086575559e1cd4811b55e07f26a93daa7c2e5751.scope.
Dec 13 14:14:27.795333 systemd[1]: Started cri-containerd-568524d55e3d52d2ad5a93fda2d8eb86b4e56fd12b2836bab20ddee9654ca639.scope.
Dec 13 14:14:27.804105 kubelet[2402]: W1213 14:14:27.802643 2402 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.24.251:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.251:6443: connect: connection refused Dec 13 14:14:27.804105 kubelet[2402]: E1213 14:14:27.802714 2402 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.24.251:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.251:6443: connect: connection refused Dec 13 14:14:27.825022 systemd[1]: Started cri-containerd-55c22b3b9c153fbf51c2dd7dad7da43de521d3a46e494a920728c4a02bf85690.scope. Dec 13 14:14:27.893273 env[1825]: time="2024-12-13T14:14:27.893215919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-251,Uid:ea4eb78860a9e111a0e2406347e46e69,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a7fcab066694a82c1f1166d086575559e1cd4811b55e07f26a93daa7c2e5751\"" Dec 13 14:14:27.901091 env[1825]: time="2024-12-13T14:14:27.900961500Z" level=info msg="CreateContainer within sandbox \"1a7fcab066694a82c1f1166d086575559e1cd4811b55e07f26a93daa7c2e5751\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 14:14:27.952981 env[1825]: time="2024-12-13T14:14:27.952907987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-251,Uid:2a5d246983e84cb4912ff01665859dd4,Namespace:kube-system,Attempt:0,} returns sandbox id \"568524d55e3d52d2ad5a93fda2d8eb86b4e56fd12b2836bab20ddee9654ca639\"" Dec 13 14:14:27.961941 env[1825]: time="2024-12-13T14:14:27.961867608Z" level=info msg="CreateContainer within sandbox \"568524d55e3d52d2ad5a93fda2d8eb86b4e56fd12b2836bab20ddee9654ca639\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 14:14:27.967351 env[1825]: time="2024-12-13T14:14:27.967296088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-251,Uid:98d482283c55d522584cd2f2d4a14cbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"55c22b3b9c153fbf51c2dd7dad7da43de521d3a46e494a920728c4a02bf85690\"" Dec 13 14:14:27.969120 env[1825]: time="2024-12-13T14:14:27.969068320Z" level=info msg="CreateContainer within sandbox \"1a7fcab066694a82c1f1166d086575559e1cd4811b55e07f26a93daa7c2e5751\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1ff61cdbbcf11678c3630901b9a0f92652d2114b7b4f20f8260b5adc90f0c382\"" Dec 13 14:14:27.972963 env[1825]: time="2024-12-13T14:14:27.972909172Z" level=info msg="CreateContainer within sandbox \"55c22b3b9c153fbf51c2dd7dad7da43de521d3a46e494a920728c4a02bf85690\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 14:14:27.973533 env[1825]: time="2024-12-13T14:14:27.973493278Z" level=info msg="StartContainer for \"1ff61cdbbcf11678c3630901b9a0f92652d2114b7b4f20f8260b5adc90f0c382\"" Dec 13 14:14:27.996631 env[1825]: time="2024-12-13T14:14:27.995589992Z" level=info msg="CreateContainer within sandbox \"568524d55e3d52d2ad5a93fda2d8eb86b4e56fd12b2836bab20ddee9654ca639\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8c799e02864bca7fe8a600e80d14881bfed87092e46525ade6682dfbaf0dc62e\"" Dec 13 14:14:27.997279 env[1825]: time="2024-12-13T14:14:27.997232209Z" level=info msg="StartContainer for 
\"8c799e02864bca7fe8a600e80d14881bfed87092e46525ade6682dfbaf0dc62e\"" Dec 13 14:14:27.998967 kubelet[2402]: E1213 14:14:27.998898 2402 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.251:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-251?timeout=10s\": dial tcp 172.31.24.251:6443: connect: connection refused" interval="1.6s" Dec 13 14:14:28.020729 systemd[1]: Started cri-containerd-1ff61cdbbcf11678c3630901b9a0f92652d2114b7b4f20f8260b5adc90f0c382.scope. Dec 13 14:14:28.021876 env[1825]: time="2024-12-13T14:14:28.021787776Z" level=info msg="CreateContainer within sandbox \"55c22b3b9c153fbf51c2dd7dad7da43de521d3a46e494a920728c4a02bf85690\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2f8eb0e33303fd7ee5f06683c37d899c4d5096b75af8f095aa9c63efb3aa7bb0\"" Dec 13 14:14:28.030918 env[1825]: time="2024-12-13T14:14:28.030830419Z" level=info msg="StartContainer for \"2f8eb0e33303fd7ee5f06683c37d899c4d5096b75af8f095aa9c63efb3aa7bb0\"" Dec 13 14:14:28.068312 systemd[1]: Started cri-containerd-8c799e02864bca7fe8a600e80d14881bfed87092e46525ade6682dfbaf0dc62e.scope. Dec 13 14:14:28.082179 kubelet[2402]: W1213 14:14:28.082093 2402 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.24.251:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.251:6443: connect: connection refused Dec 13 14:14:28.082179 kubelet[2402]: E1213 14:14:28.082187 2402 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.24.251:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.251:6443: connect: connection refused Dec 13 14:14:28.099499 systemd[1]: Started cri-containerd-2f8eb0e33303fd7ee5f06683c37d899c4d5096b75af8f095aa9c63efb3aa7bb0.scope. 
Dec 13 14:14:28.107288 kubelet[2402]: I1213 14:14:28.107245 2402 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-24-251" Dec 13 14:14:28.110419 kubelet[2402]: E1213 14:14:28.107793 2402 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.24.251:6443/api/v1/nodes\": dial tcp 172.31.24.251:6443: connect: connection refused" node="ip-172-31-24-251" Dec 13 14:14:28.183825 env[1825]: time="2024-12-13T14:14:28.183742182Z" level=info msg="StartContainer for \"1ff61cdbbcf11678c3630901b9a0f92652d2114b7b4f20f8260b5adc90f0c382\" returns successfully" Dec 13 14:14:28.203705 env[1825]: time="2024-12-13T14:14:28.203627244Z" level=info msg="StartContainer for \"8c799e02864bca7fe8a600e80d14881bfed87092e46525ade6682dfbaf0dc62e\" returns successfully" Dec 13 14:14:28.261874 env[1825]: time="2024-12-13T14:14:28.261720956Z" level=info msg="StartContainer for \"2f8eb0e33303fd7ee5f06683c37d899c4d5096b75af8f095aa9c63efb3aa7bb0\" returns successfully" Dec 13 14:14:29.709986 kubelet[2402]: I1213 14:14:29.709942 2402 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-24-251" Dec 13 14:14:30.472856 amazon-ssm-agent[1821]: 2024-12-13 14:14:30 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Dec 13 14:14:31.431803 kubelet[2402]: E1213 14:14:31.431746 2402 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-24-251\" not found" node="ip-172-31-24-251" Dec 13 14:14:31.466178 kubelet[2402]: I1213 14:14:31.466123 2402 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-24-251" Dec 13 14:14:31.509837 kubelet[2402]: E1213 14:14:31.509782 2402 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-24-251.1810c218653bcce4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-24-251,UID:ip-172-31-24-251,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-24-251,},FirstTimestamp:2024-12-13 14:14:26.5710625 +0000 UTC m=+0.709027098,LastTimestamp:2024-12-13 14:14:26.5710625 +0000 UTC m=+0.709027098,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-24-251,}" Dec 13 14:14:31.562762 kubelet[2402]: I1213 14:14:31.562699 2402 apiserver.go:52] "Watching apiserver" Dec 13 14:14:31.574165 kubelet[2402]: E1213 14:14:31.574115 2402 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-24-251.1810c218669b995e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-24-251,UID:ip-172-31-24-251,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-24-251,},FirstTimestamp:2024-12-13 14:14:26.594117982 +0000 UTC m=+0.732082580,LastTimestamp:2024-12-13 14:14:26.594117982 +0000 UTC m=+0.732082580,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-24-251,}" Dec 13 14:14:31.591785 kubelet[2402]: I1213 14:14:31.591717 2402 
desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 14:14:34.479699 update_engine[1811]: I1213 14:14:34.479629 1811 update_attempter.cc:509] Updating boot flags... Dec 13 14:14:34.593923 systemd[1]: Reloading. Dec 13 14:14:34.795771 /usr/lib/systemd/system-generators/torcx-generator[2754]: time="2024-12-13T14:14:34Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:14:34.795836 /usr/lib/systemd/system-generators/torcx-generator[2754]: time="2024-12-13T14:14:34Z" level=info msg="torcx already run" Dec 13 14:14:35.091806 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:14:35.092454 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:14:35.169644 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:14:35.567494 systemd[1]: Stopping kubelet.service... Dec 13 14:14:35.593644 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 14:14:35.594239 systemd[1]: Stopped kubelet.service. Dec 13 14:14:35.594472 systemd[1]: kubelet.service: Consumed 1.402s CPU time. Dec 13 14:14:35.599999 systemd[1]: Starting kubelet.service... Dec 13 14:14:35.898538 systemd[1]: Started kubelet.service. Dec 13 14:14:36.032305 kubelet[2937]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:14:36.032305 kubelet[2937]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 14:14:36.032305 kubelet[2937]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:14:36.033001 kubelet[2937]: I1213 14:14:36.032384 2937 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:14:36.044627 kubelet[2937]: I1213 14:14:36.044585 2937 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 14:14:36.044930 kubelet[2937]: I1213 14:14:36.044904 2937 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:14:36.045437 kubelet[2937]: I1213 14:14:36.045407 2937 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 14:14:36.046725 sudo[2948]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 14:14:36.047277 sudo[2948]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Dec 13 14:14:36.053322 kubelet[2937]: I1213 14:14:36.052272 2937 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
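
"Client rotation is on, will bootstrap in background" together with the certificate_store line shows the restarted kubelet picking up its rotated client credential from /var/lib/kubelet/pki/kubelet-client-current.pem, a single PEM file holding both certificate and key. A small Go sketch of loading such a combined PEM — crypto/tls accepts the same file for both arguments when it contains both blocks:

    package main

    import (
    	"crypto/tls"
    	"crypto/x509"
    	"log"
    )

    func main() {
    	// Combined cert+key PEM, as used by the kubelet's rotating client credential.
    	const pemPath = "/var/lib/kubelet/pki/kubelet-client-current.pem"

    	// LoadX509KeyPair scans for CERTIFICATE and PRIVATE KEY blocks separately,
    	// so passing the same combined file twice works.
    	pair, err := tls.LoadX509KeyPair(pemPath, pemPath)
    	if err != nil {
    		log.Fatalf("loading client credential: %v", err)
    	}

    	leaf, err := x509.ParseCertificate(pair.Certificate[0])
    	if err != nil {
    		log.Fatal(err)
    	}
    	log.Printf("client cert subject=%s, expires=%s", leaf.Subject, leaf.NotAfter)
    }
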
Dec 13 14:14:36.056475 kubelet[2937]: I1213 14:14:36.056408 2937 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:14:36.067101 kubelet[2937]: I1213 14:14:36.067047 2937 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 14:14:36.067529 kubelet[2937]: I1213 14:14:36.067491 2937 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:14:36.067843 kubelet[2937]: I1213 14:14:36.067805 2937 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 14:14:36.068007 kubelet[2937]: I1213 14:14:36.067851 2937 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 14:14:36.068007 kubelet[2937]: I1213 14:14:36.067873 2937 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 14:14:36.068007 kubelet[2937]: I1213 14:14:36.067926 2937 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:14:36.068192 kubelet[2937]: I1213 14:14:36.068097 2937 kubelet.go:396] "Attempting to sync node with API server" Dec 13 14:14:36.068192 kubelet[2937]: I1213 14:14:36.068122 2937 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:14:36.069176 kubelet[2937]: I1213 14:14:36.068160 2937 kubelet.go:312] "Adding apiserver pod source" Dec 13 14:14:36.069323 kubelet[2937]: I1213 14:14:36.069198 2937 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:14:36.114171 kubelet[2937]: I1213 14:14:36.114124 2937 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:14:36.114496 kubelet[2937]: I1213 14:14:36.114462 2937 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:14:36.115223 kubelet[2937]: I1213 14:14:36.115188 2937 server.go:1256] "Started kubelet" Dec 13 14:14:36.121630 kubelet[2937]: I1213 14:14:36.118731 2937 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 14:14:36.121630 kubelet[2937]: I1213 14:14:36.119642 2937 
server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:14:36.121630 kubelet[2937]: I1213 14:14:36.121630 2937 server.go:461] "Adding debug handlers to kubelet server" Dec 13 14:14:36.125313 kubelet[2937]: I1213 14:14:36.125255 2937 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:14:36.130022 kubelet[2937]: I1213 14:14:36.129970 2937 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 14:14:36.130622 kubelet[2937]: I1213 14:14:36.130548 2937 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 14:14:36.131021 kubelet[2937]: I1213 14:14:36.130981 2937 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 14:14:36.144353 kubelet[2937]: I1213 14:14:36.143403 2937 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 14:14:36.163633 kubelet[2937]: I1213 14:14:36.163454 2937 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:14:36.165388 kubelet[2937]: I1213 14:14:36.165331 2937 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 14:14:36.166930 kubelet[2937]: E1213 14:14:36.166881 2937 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 14:14:36.170971 kubelet[2937]: I1213 14:14:36.170919 2937 factory.go:221] Registration of the containerd container factory successfully Dec 13 14:14:36.190960 kubelet[2937]: I1213 14:14:36.190906 2937 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 14:14:36.193046 kubelet[2937]: I1213 14:14:36.192994 2937 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 14:14:36.193046 kubelet[2937]: I1213 14:14:36.193040 2937 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 14:14:36.193254 kubelet[2937]: I1213 14:14:36.193080 2937 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 14:14:36.193254 kubelet[2937]: E1213 14:14:36.193166 2937 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 14:14:36.243117 kubelet[2937]: I1213 14:14:36.243068 2937 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-24-251" Dec 13 14:14:36.259642 kubelet[2937]: I1213 14:14:36.259590 2937 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-24-251" Dec 13 14:14:36.259810 kubelet[2937]: I1213 14:14:36.259719 2937 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-24-251" Dec 13 14:14:36.294813 kubelet[2937]: E1213 14:14:36.294774 2937 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 14:14:36.353334 kubelet[2937]: I1213 14:14:36.353282 2937 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 14:14:36.353334 kubelet[2937]: I1213 14:14:36.353324 2937 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 14:14:36.353617 kubelet[2937]: I1213 14:14:36.353359 2937 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:14:36.353735 kubelet[2937]: I1213 14:14:36.353694 2937 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 14:14:36.353812 kubelet[2937]: I1213 14:14:36.353747 2937 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 14:14:36.353812 kubelet[2937]: I1213 14:14:36.353775 2937 policy_none.go:49] "None policy: Start" Dec 13 14:14:36.355262 kubelet[2937]: I1213 14:14:36.355212 2937 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 14:14:36.355262 kubelet[2937]: I1213 14:14:36.355267 2937 state_mem.go:35] "Initializing new in-memory state store" Dec 13 14:14:36.355629 kubelet[2937]: I1213 14:14:36.355593 2937 state_mem.go:75] "Updated machine memory state" Dec 13 14:14:36.368242 kubelet[2937]: I1213 14:14:36.368204 2937 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 14:14:36.371276 kubelet[2937]: I1213 14:14:36.371241 2937 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 14:14:36.499511 kubelet[2937]: I1213 14:14:36.499359 2937 topology_manager.go:215] "Topology Admit Handler" podUID="ea4eb78860a9e111a0e2406347e46e69" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-24-251" Dec 13 14:14:36.499839 kubelet[2937]: I1213 14:14:36.499813 2937 topology_manager.go:215] "Topology Admit Handler" podUID="98d482283c55d522584cd2f2d4a14cbf" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-24-251" Dec 13 14:14:36.500060 kubelet[2937]: I1213 14:14:36.500038 2937 topology_manager.go:215] "Topology Admit Handler" podUID="2a5d246983e84cb4912ff01665859dd4" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-24-251" Dec 13 14:14:36.535715 kubelet[2937]: I1213 14:14:36.535154 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ea4eb78860a9e111a0e2406347e46e69-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-251\" (UID: \"ea4eb78860a9e111a0e2406347e46e69\") " 
pod="kube-system/kube-controller-manager-ip-172-31-24-251" Dec 13 14:14:36.535715 kubelet[2937]: I1213 14:14:36.535257 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ea4eb78860a9e111a0e2406347e46e69-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-251\" (UID: \"ea4eb78860a9e111a0e2406347e46e69\") " pod="kube-system/kube-controller-manager-ip-172-31-24-251" Dec 13 14:14:36.535715 kubelet[2937]: I1213 14:14:36.535333 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ea4eb78860a9e111a0e2406347e46e69-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-251\" (UID: \"ea4eb78860a9e111a0e2406347e46e69\") " pod="kube-system/kube-controller-manager-ip-172-31-24-251" Dec 13 14:14:36.535715 kubelet[2937]: I1213 14:14:36.535408 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea4eb78860a9e111a0e2406347e46e69-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-251\" (UID: \"ea4eb78860a9e111a0e2406347e46e69\") " pod="kube-system/kube-controller-manager-ip-172-31-24-251" Dec 13 14:14:36.535715 kubelet[2937]: I1213 14:14:36.535488 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ea4eb78860a9e111a0e2406347e46e69-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-251\" (UID: \"ea4eb78860a9e111a0e2406347e46e69\") " pod="kube-system/kube-controller-manager-ip-172-31-24-251" Dec 13 14:14:36.536161 kubelet[2937]: I1213 14:14:36.535594 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2a5d246983e84cb4912ff01665859dd4-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-251\" (UID: \"2a5d246983e84cb4912ff01665859dd4\") " pod="kube-system/kube-apiserver-ip-172-31-24-251" Dec 13 14:14:36.536161 kubelet[2937]: I1213 14:14:36.535731 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/98d482283c55d522584cd2f2d4a14cbf-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-251\" (UID: \"98d482283c55d522584cd2f2d4a14cbf\") " pod="kube-system/kube-scheduler-ip-172-31-24-251" Dec 13 14:14:36.536161 kubelet[2937]: I1213 14:14:36.535781 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2a5d246983e84cb4912ff01665859dd4-ca-certs\") pod \"kube-apiserver-ip-172-31-24-251\" (UID: \"2a5d246983e84cb4912ff01665859dd4\") " pod="kube-system/kube-apiserver-ip-172-31-24-251" Dec 13 14:14:36.536161 kubelet[2937]: I1213 14:14:36.535858 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2a5d246983e84cb4912ff01665859dd4-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-251\" (UID: \"2a5d246983e84cb4912ff01665859dd4\") " pod="kube-system/kube-apiserver-ip-172-31-24-251" Dec 13 14:14:37.078348 kubelet[2937]: I1213 14:14:37.078284 2937 apiserver.go:52] "Watching apiserver" Dec 13 14:14:37.122855 sudo[2948]: pam_unix(sudo:session): session closed for user 
root Dec 13 14:14:37.131550 kubelet[2937]: I1213 14:14:37.131493 2937 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 14:14:37.298807 kubelet[2937]: I1213 14:14:37.298738 2937 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-24-251" podStartSLOduration=1.2986509929999999 podStartE2EDuration="1.298650993s" podCreationTimestamp="2024-12-13 14:14:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:14:37.281225148 +0000 UTC m=+1.367737417" watchObservedRunningTime="2024-12-13 14:14:37.298650993 +0000 UTC m=+1.385163250" Dec 13 14:14:37.312113 kubelet[2937]: I1213 14:14:37.312071 2937 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-24-251" podStartSLOduration=1.312014571 podStartE2EDuration="1.312014571s" podCreationTimestamp="2024-12-13 14:14:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:14:37.299105465 +0000 UTC m=+1.385617746" watchObservedRunningTime="2024-12-13 14:14:37.312014571 +0000 UTC m=+1.398526804" Dec 13 14:14:37.335461 kubelet[2937]: I1213 14:14:37.335327 2937 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-24-251" podStartSLOduration=1.3352615 podStartE2EDuration="1.3352615s" podCreationTimestamp="2024-12-13 14:14:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:14:37.315964095 +0000 UTC m=+1.402476352" watchObservedRunningTime="2024-12-13 14:14:37.3352615 +0000 UTC m=+1.421773757" Dec 13 14:14:40.494704 sudo[2063]: pam_unix(sudo:session): session closed for user root Dec 13 14:14:40.518265 sshd[2060]: pam_unix(sshd:session): session closed for user core Dec 13 14:14:40.523448 systemd-logind[1810]: Session 5 logged out. Waiting for processes to exit. Dec 13 14:14:40.523806 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 14:14:40.524132 systemd[1]: session-5.scope: Consumed 11.046s CPU time. Dec 13 14:14:40.526059 systemd-logind[1810]: Removed session 5. Dec 13 14:14:40.526941 systemd[1]: sshd@4-172.31.24.251:22-139.178.89.65:40080.service: Deactivated successfully. Dec 13 14:14:48.801968 kubelet[2937]: I1213 14:14:48.801917 2937 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 14:14:48.803405 env[1825]: time="2024-12-13T14:14:48.803353139Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 14:14:48.804540 kubelet[2937]: I1213 14:14:48.804504 2937 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 14:14:49.122330 kubelet[2937]: I1213 14:14:49.122176 2937 topology_manager.go:215] "Topology Admit Handler" podUID="666fdc71-29e9-47f9-860d-0a6c19f70292" podNamespace="kube-system" podName="cilium-operator-5cc964979-98jv4" Dec 13 14:14:49.133818 systemd[1]: Created slice kubepods-besteffort-pod666fdc71_29e9_47f9_860d_0a6c19f70292.slice. 
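
"Updating runtime config through cri with podcidr CIDR=192.168.0.0/24" is the kubelet pushing the node's freshly assigned pod CIDR down to containerd; every pod IP the CNI allocates on this node must then fall inside that range. A quick stdlib check of that invariant — the sample IPs below are made up for illustration:

    package main

    import (
    	"fmt"
    	"log"
    	"net"
    )

    func main() {
    	// The node's pod CIDR as reported in the kubelet_network log line.
    	_, podCIDR, err := net.ParseCIDR("192.168.0.0/24")
    	if err != nil {
    		log.Fatal(err)
    	}

    	// Any pod IP the CNI hands out must fall inside this range.
    	for _, ip := range []string{"192.168.0.17", "10.0.0.4"} {
    		fmt.Printf("%s in %s: %v\n", ip, podCIDR, podCIDR.Contains(net.ParseIP(ip)))
    	}
    }
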
Dec 13 14:14:49.216827 kubelet[2937]: I1213 14:14:49.216771 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/666fdc71-29e9-47f9-860d-0a6c19f70292-cilium-config-path\") pod \"cilium-operator-5cc964979-98jv4\" (UID: \"666fdc71-29e9-47f9-860d-0a6c19f70292\") " pod="kube-system/cilium-operator-5cc964979-98jv4" Dec 13 14:14:49.217714 kubelet[2937]: I1213 14:14:49.217680 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtfj2\" (UniqueName: \"kubernetes.io/projected/666fdc71-29e9-47f9-860d-0a6c19f70292-kube-api-access-mtfj2\") pod \"cilium-operator-5cc964979-98jv4\" (UID: \"666fdc71-29e9-47f9-860d-0a6c19f70292\") " pod="kube-system/cilium-operator-5cc964979-98jv4" Dec 13 14:14:49.233271 kubelet[2937]: I1213 14:14:49.233209 2937 topology_manager.go:215] "Topology Admit Handler" podUID="9a9b2c97-7ce9-4375-9824-9fe189b1b749" podNamespace="kube-system" podName="cilium-f5brp" Dec 13 14:14:49.244851 systemd[1]: Created slice kubepods-burstable-pod9a9b2c97_7ce9_4375_9824_9fe189b1b749.slice. Dec 13 14:14:49.253047 kubelet[2937]: I1213 14:14:49.252979 2937 topology_manager.go:215] "Topology Admit Handler" podUID="6afbbac6-f98d-4c9d-8d05-a4b62673d0e2" podNamespace="kube-system" podName="kube-proxy-sv8lj" Dec 13 14:14:49.264362 systemd[1]: Created slice kubepods-besteffort-pod6afbbac6_f98d_4c9d_8d05_a4b62673d0e2.slice. Dec 13 14:14:49.267510 kubelet[2937]: W1213 14:14:49.267447 2937 reflector.go:539] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-24-251" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-24-251' and this object Dec 13 14:14:49.267815 kubelet[2937]: E1213 14:14:49.267770 2937 reflector.go:147] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-24-251" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-24-251' and this object Dec 13 14:14:49.278531 kubelet[2937]: W1213 14:14:49.278469 2937 reflector.go:539] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-24-251" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-24-251' and this object Dec 13 14:14:49.278798 kubelet[2937]: E1213 14:14:49.278775 2937 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-24-251" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-24-251' and this object Dec 13 14:14:49.318157 kubelet[2937]: I1213 14:14:49.318114 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9a9b2c97-7ce9-4375-9824-9fe189b1b749-bpf-maps\") pod \"cilium-f5brp\" (UID: \"9a9b2c97-7ce9-4375-9824-9fe189b1b749\") " pod="kube-system/cilium-f5brp" Dec 13 14:14:49.318437 kubelet[2937]: I1213 14:14:49.318410 2937 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9a9b2c97-7ce9-4375-9824-9fe189b1b749-cilium-cgroup\") pod \"cilium-f5brp\" (UID: \"9a9b2c97-7ce9-4375-9824-9fe189b1b749\") " pod="kube-system/cilium-f5brp" Dec 13 14:14:49.318634 kubelet[2937]: I1213 14:14:49.318609 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk7jl\" (UniqueName: \"kubernetes.io/projected/6afbbac6-f98d-4c9d-8d05-a4b62673d0e2-kube-api-access-dk7jl\") pod \"kube-proxy-sv8lj\" (UID: \"6afbbac6-f98d-4c9d-8d05-a4b62673d0e2\") " pod="kube-system/kube-proxy-sv8lj" Dec 13 14:14:49.318794 kubelet[2937]: I1213 14:14:49.318771 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9a9b2c97-7ce9-4375-9824-9fe189b1b749-cilium-run\") pod \"cilium-f5brp\" (UID: \"9a9b2c97-7ce9-4375-9824-9fe189b1b749\") " pod="kube-system/cilium-f5brp" Dec 13 14:14:49.318939 kubelet[2937]: I1213 14:14:49.318917 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9a9b2c97-7ce9-4375-9824-9fe189b1b749-etc-cni-netd\") pod \"cilium-f5brp\" (UID: \"9a9b2c97-7ce9-4375-9824-9fe189b1b749\") " pod="kube-system/cilium-f5brp" Dec 13 14:14:49.319095 kubelet[2937]: I1213 14:14:49.319073 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9a9b2c97-7ce9-4375-9824-9fe189b1b749-hubble-tls\") pod \"cilium-f5brp\" (UID: \"9a9b2c97-7ce9-4375-9824-9fe189b1b749\") " pod="kube-system/cilium-f5brp" Dec 13 14:14:49.319262 kubelet[2937]: I1213 14:14:49.319240 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcg5q\" (UniqueName: \"kubernetes.io/projected/9a9b2c97-7ce9-4375-9824-9fe189b1b749-kube-api-access-lcg5q\") pod \"cilium-f5brp\" (UID: \"9a9b2c97-7ce9-4375-9824-9fe189b1b749\") " pod="kube-system/cilium-f5brp" Dec 13 14:14:49.321079 kubelet[2937]: I1213 14:14:49.319412 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6afbbac6-f98d-4c9d-8d05-a4b62673d0e2-lib-modules\") pod \"kube-proxy-sv8lj\" (UID: \"6afbbac6-f98d-4c9d-8d05-a4b62673d0e2\") " pod="kube-system/kube-proxy-sv8lj" Dec 13 14:14:49.321368 kubelet[2937]: I1213 14:14:49.321338 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9a9b2c97-7ce9-4375-9824-9fe189b1b749-cni-path\") pod \"cilium-f5brp\" (UID: \"9a9b2c97-7ce9-4375-9824-9fe189b1b749\") " pod="kube-system/cilium-f5brp" Dec 13 14:14:49.321717 kubelet[2937]: I1213 14:14:49.321680 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9a9b2c97-7ce9-4375-9824-9fe189b1b749-clustermesh-secrets\") pod \"cilium-f5brp\" (UID: \"9a9b2c97-7ce9-4375-9824-9fe189b1b749\") " pod="kube-system/cilium-f5brp" Dec 13 14:14:49.321835 kubelet[2937]: I1213 14:14:49.321751 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: 
\"kubernetes.io/configmap/6afbbac6-f98d-4c9d-8d05-a4b62673d0e2-kube-proxy\") pod \"kube-proxy-sv8lj\" (UID: \"6afbbac6-f98d-4c9d-8d05-a4b62673d0e2\") " pod="kube-system/kube-proxy-sv8lj" Dec 13 14:14:49.321835 kubelet[2937]: I1213 14:14:49.321802 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6afbbac6-f98d-4c9d-8d05-a4b62673d0e2-xtables-lock\") pod \"kube-proxy-sv8lj\" (UID: \"6afbbac6-f98d-4c9d-8d05-a4b62673d0e2\") " pod="kube-system/kube-proxy-sv8lj" Dec 13 14:14:49.321974 kubelet[2937]: I1213 14:14:49.321871 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9a9b2c97-7ce9-4375-9824-9fe189b1b749-hostproc\") pod \"cilium-f5brp\" (UID: \"9a9b2c97-7ce9-4375-9824-9fe189b1b749\") " pod="kube-system/cilium-f5brp" Dec 13 14:14:49.321974 kubelet[2937]: I1213 14:14:49.321916 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9a9b2c97-7ce9-4375-9824-9fe189b1b749-host-proc-sys-kernel\") pod \"cilium-f5brp\" (UID: \"9a9b2c97-7ce9-4375-9824-9fe189b1b749\") " pod="kube-system/cilium-f5brp" Dec 13 14:14:49.321974 kubelet[2937]: I1213 14:14:49.321959 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9a9b2c97-7ce9-4375-9824-9fe189b1b749-lib-modules\") pod \"cilium-f5brp\" (UID: \"9a9b2c97-7ce9-4375-9824-9fe189b1b749\") " pod="kube-system/cilium-f5brp" Dec 13 14:14:49.322190 kubelet[2937]: I1213 14:14:49.322025 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9a9b2c97-7ce9-4375-9824-9fe189b1b749-xtables-lock\") pod \"cilium-f5brp\" (UID: \"9a9b2c97-7ce9-4375-9824-9fe189b1b749\") " pod="kube-system/cilium-f5brp" Dec 13 14:14:49.322190 kubelet[2937]: I1213 14:14:49.322071 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9a9b2c97-7ce9-4375-9824-9fe189b1b749-cilium-config-path\") pod \"cilium-f5brp\" (UID: \"9a9b2c97-7ce9-4375-9824-9fe189b1b749\") " pod="kube-system/cilium-f5brp" Dec 13 14:14:49.322190 kubelet[2937]: I1213 14:14:49.322122 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9a9b2c97-7ce9-4375-9824-9fe189b1b749-host-proc-sys-net\") pod \"cilium-f5brp\" (UID: \"9a9b2c97-7ce9-4375-9824-9fe189b1b749\") " pod="kube-system/cilium-f5brp" Dec 13 14:14:49.453305 env[1825]: time="2024-12-13T14:14:49.453205233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-98jv4,Uid:666fdc71-29e9-47f9-860d-0a6c19f70292,Namespace:kube-system,Attempt:0,}" Dec 13 14:14:49.509398 env[1825]: time="2024-12-13T14:14:49.509281930Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:14:49.509763 env[1825]: time="2024-12-13T14:14:49.509685345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:14:49.509964 env[1825]: time="2024-12-13T14:14:49.509907573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:14:49.510510 env[1825]: time="2024-12-13T14:14:49.510388597Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9151158ca0f36cd19229f0cbf113091bc074b944572eceddf2cb2a2fc12ce26c pid=3024 runtime=io.containerd.runc.v2 Dec 13 14:14:49.536755 systemd[1]: Started cri-containerd-9151158ca0f36cd19229f0cbf113091bc074b944572eceddf2cb2a2fc12ce26c.scope. Dec 13 14:14:49.612222 env[1825]: time="2024-12-13T14:14:49.611864138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-98jv4,Uid:666fdc71-29e9-47f9-860d-0a6c19f70292,Namespace:kube-system,Attempt:0,} returns sandbox id \"9151158ca0f36cd19229f0cbf113091bc074b944572eceddf2cb2a2fc12ce26c\"" Dec 13 14:14:49.618139 env[1825]: time="2024-12-13T14:14:49.618085991Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 14:14:50.450915 env[1825]: time="2024-12-13T14:14:50.450819467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f5brp,Uid:9a9b2c97-7ce9-4375-9824-9fe189b1b749,Namespace:kube-system,Attempt:0,}" Dec 13 14:14:50.473433 env[1825]: time="2024-12-13T14:14:50.472739726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sv8lj,Uid:6afbbac6-f98d-4c9d-8d05-a4b62673d0e2,Namespace:kube-system,Attempt:0,}" Dec 13 14:14:50.493196 env[1825]: time="2024-12-13T14:14:50.493054070Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:14:50.493541 env[1825]: time="2024-12-13T14:14:50.493432162Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:14:50.493766 env[1825]: time="2024-12-13T14:14:50.493707349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:14:50.494306 env[1825]: time="2024-12-13T14:14:50.494224349Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/13372ac4053010692d47646283b4c8d700099e4b72940a7719f1e5ac20a855a7 pid=3065 runtime=io.containerd.runc.v2 Dec 13 14:14:50.524654 systemd[1]: Started cri-containerd-13372ac4053010692d47646283b4c8d700099e4b72940a7719f1e5ac20a855a7.scope. Dec 13 14:14:50.572988 env[1825]: time="2024-12-13T14:14:50.572886527Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:14:50.573228 env[1825]: time="2024-12-13T14:14:50.573181419Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:14:50.573458 env[1825]: time="2024-12-13T14:14:50.573411627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:14:50.574209 env[1825]: time="2024-12-13T14:14:50.574114217Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3e6155fced64d25ce8a0b9800fb65ae8d3eae4869a84625fb9b975f2c821cabb pid=3100 runtime=io.containerd.runc.v2 Dec 13 14:14:50.595408 env[1825]: time="2024-12-13T14:14:50.595348399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f5brp,Uid:9a9b2c97-7ce9-4375-9824-9fe189b1b749,Namespace:kube-system,Attempt:0,} returns sandbox id \"13372ac4053010692d47646283b4c8d700099e4b72940a7719f1e5ac20a855a7\"" Dec 13 14:14:50.611296 systemd[1]: Started cri-containerd-3e6155fced64d25ce8a0b9800fb65ae8d3eae4869a84625fb9b975f2c821cabb.scope. Dec 13 14:14:50.663187 env[1825]: time="2024-12-13T14:14:50.663127096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sv8lj,Uid:6afbbac6-f98d-4c9d-8d05-a4b62673d0e2,Namespace:kube-system,Attempt:0,} returns sandbox id \"3e6155fced64d25ce8a0b9800fb65ae8d3eae4869a84625fb9b975f2c821cabb\"" Dec 13 14:14:50.670147 env[1825]: time="2024-12-13T14:14:50.669909621Z" level=info msg="CreateContainer within sandbox \"3e6155fced64d25ce8a0b9800fb65ae8d3eae4869a84625fb9b975f2c821cabb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 14:14:50.704375 env[1825]: time="2024-12-13T14:14:50.704202077Z" level=info msg="CreateContainer within sandbox \"3e6155fced64d25ce8a0b9800fb65ae8d3eae4869a84625fb9b975f2c821cabb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0ff8284bf77632c81bacde206cb87cc2de7c07ea2750b781a99bdd9791249fd8\"" Dec 13 14:14:50.707998 env[1825]: time="2024-12-13T14:14:50.707938598Z" level=info msg="StartContainer for \"0ff8284bf77632c81bacde206cb87cc2de7c07ea2750b781a99bdd9791249fd8\"" Dec 13 14:14:50.741298 systemd[1]: Started cri-containerd-0ff8284bf77632c81bacde206cb87cc2de7c07ea2750b781a99bdd9791249fd8.scope. 
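
Around kube-proxy-sv8lj the log shows the standard CRI ordering: RunPodSandbox returns a sandbox ID, CreateContainer is issued within that sandbox, and StartContainer runs the result. A simplified stand-in for that call sequence — the real API is the gRPC RuntimeService in k8s.io/cri-api, with much richer request and response types:

    package main

    import "fmt"

    // Simplified stand-in for the CRI runtime service, for illustration only.
    type runtime interface {
    	RunPodSandbox(name string) (sandboxID string, err error)
    	CreateContainer(sandboxID, name string) (containerID string, err error)
    	StartContainer(containerID string) error
    }

    type fakeRuntime struct{ n int }

    func (f *fakeRuntime) RunPodSandbox(name string) (string, error) {
    	f.n++
    	return fmt.Sprintf("sandbox-%d", f.n), nil
    }
    func (f *fakeRuntime) CreateContainer(sb, name string) (string, error) {
    	f.n++
    	return fmt.Sprintf("ctr-%d", f.n), nil
    }
    func (f *fakeRuntime) StartContainer(id string) error { return nil }

    func main() {
    	var rt runtime = &fakeRuntime{}

    	// Same order as the log: sandbox first, then create, then start.
    	sb, _ := rt.RunPodSandbox("kube-proxy-sv8lj")
    	ctr, _ := rt.CreateContainer(sb, "kube-proxy")
    	if err := rt.StartContainer(ctr); err != nil {
    		panic(err)
    	}
    	fmt.Println("started", ctr, "in", sb)
    }
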
Dec 13 14:14:50.830431 env[1825]: time="2024-12-13T14:14:50.830326616Z" level=info msg="StartContainer for \"0ff8284bf77632c81bacde206cb87cc2de7c07ea2750b781a99bdd9791249fd8\" returns successfully" Dec 13 14:14:52.316654 env[1825]: time="2024-12-13T14:14:52.316596835Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:52.318714 env[1825]: time="2024-12-13T14:14:52.318664941Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:52.321235 env[1825]: time="2024-12-13T14:14:52.321173576Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:52.322612 env[1825]: time="2024-12-13T14:14:52.322529568Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Dec 13 14:14:52.326111 env[1825]: time="2024-12-13T14:14:52.325894071Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 14:14:52.331424 env[1825]: time="2024-12-13T14:14:52.331345905Z" level=info msg="CreateContainer within sandbox \"9151158ca0f36cd19229f0cbf113091bc074b944572eceddf2cb2a2fc12ce26c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 14:14:52.352887 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1121024472.mount: Deactivated successfully. Dec 13 14:14:52.362270 env[1825]: time="2024-12-13T14:14:52.362212948Z" level=info msg="CreateContainer within sandbox \"9151158ca0f36cd19229f0cbf113091bc074b944572eceddf2cb2a2fc12ce26c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"98e76a2989669d39eec48a15c8705a9f3ffb7f9b739f383bfc0a0533b0a1d47f\"" Dec 13 14:14:52.365548 env[1825]: time="2024-12-13T14:14:52.364976810Z" level=info msg="StartContainer for \"98e76a2989669d39eec48a15c8705a9f3ffb7f9b739f383bfc0a0533b0a1d47f\"" Dec 13 14:14:52.404120 systemd[1]: Started cri-containerd-98e76a2989669d39eec48a15c8705a9f3ffb7f9b739f383bfc0a0533b0a1d47f.scope. Dec 13 14:14:52.420133 systemd[1]: run-containerd-runc-k8s.io-98e76a2989669d39eec48a15c8705a9f3ffb7f9b739f383bfc0a0533b0a1d47f-runc.xaTNck.mount: Deactivated successfully. 
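
The cilium-operator image above is pulled by tag and digest ("v1.12.5@sha256:…"), and PullImage resolves it to a content-addressed image reference. A toy parser splitting such a pinned reference into repository, tag, and digest — real code should use a proper reference library (e.g. github.com/distribution/reference) rather than string surgery:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // splitRef breaks a pinned reference like repo:tag@sha256:... into parts.
    func splitRef(ref string) (repo, tag, digest string) {
    	if i := strings.Index(ref, "@"); i >= 0 {
    		ref, digest = ref[:i], ref[i+1:]
    	}
    	// Only treat the last colon as a tag separator if it comes after the
    	// final slash (guards registry:port/name references with no tag).
    	if i := strings.LastIndex(ref, ":"); i > strings.LastIndex(ref, "/") {
    		ref, tag = ref[:i], ref[i+1:]
    	}
    	return ref, tag, digest
    }

    func main() {
    	repo, tag, digest := splitRef(
    		"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e")
    	fmt.Println("repo:  ", repo)
    	fmt.Println("tag:   ", tag)
    	fmt.Println("digest:", digest)
    }
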
Dec 13 14:14:52.475738 env[1825]: time="2024-12-13T14:14:52.475667570Z" level=info msg="StartContainer for \"98e76a2989669d39eec48a15c8705a9f3ffb7f9b739f383bfc0a0533b0a1d47f\" returns successfully" Dec 13 14:14:53.409188 kubelet[2937]: I1213 14:14:53.409126 2937 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-sv8lj" podStartSLOduration=4.409062589 podStartE2EDuration="4.409062589s" podCreationTimestamp="2024-12-13 14:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:14:51.299038926 +0000 UTC m=+15.385551195" watchObservedRunningTime="2024-12-13 14:14:53.409062589 +0000 UTC m=+17.495574834" Dec 13 14:14:56.217743 kubelet[2937]: I1213 14:14:56.217689 2937 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-98jv4" podStartSLOduration=4.508914214 podStartE2EDuration="7.217625252s" podCreationTimestamp="2024-12-13 14:14:49 +0000 UTC" firstStartedPulling="2024-12-13 14:14:49.614548236 +0000 UTC m=+13.701060493" lastFinishedPulling="2024-12-13 14:14:52.323259274 +0000 UTC m=+16.409771531" observedRunningTime="2024-12-13 14:14:53.410595363 +0000 UTC m=+17.497107632" watchObservedRunningTime="2024-12-13 14:14:56.217625252 +0000 UTC m=+20.304137509" Dec 13 14:15:01.168272 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2824565842.mount: Deactivated successfully. Dec 13 14:15:05.226868 env[1825]: time="2024-12-13T14:15:05.226799221Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:05.231267 env[1825]: time="2024-12-13T14:15:05.231209174Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:05.239749 env[1825]: time="2024-12-13T14:15:05.239666289Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:05.242839 env[1825]: time="2024-12-13T14:15:05.242764898Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Dec 13 14:15:05.249139 env[1825]: time="2024-12-13T14:15:05.249082577Z" level=info msg="CreateContainer within sandbox \"13372ac4053010692d47646283b4c8d700099e4b72940a7719f1e5ac20a855a7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:15:05.272048 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2398677638.mount: Deactivated successfully. 
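
The numbers in the cilium-operator startup-latency line are internally consistent: podStartE2EDuration equals watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that E2E figure minus the image-pull window (lastFinishedPulling − firstStartedPulling), i.e. 7.217625252s − 2.708711038s = 4.508914214s. Reproducing the arithmetic with the timestamps quoted in the line (monotonic "m=+…" suffixes dropped so they parse):

    package main

    import (
    	"fmt"
    	"time"
    )

    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func mustParse(s string) time.Time {
    	t, err := time.Parse(layout, s)
    	if err != nil {
    		panic(err)
    	}
    	return t
    }

    func main() {
    	created := mustParse("2024-12-13 14:14:49 +0000 UTC")
    	firstPull := mustParse("2024-12-13 14:14:49.614548236 +0000 UTC")
    	lastPull := mustParse("2024-12-13 14:14:52.323259274 +0000 UTC")
    	observed := mustParse("2024-12-13 14:14:56.217625252 +0000 UTC")

    	e2e := observed.Sub(created)
    	slo := e2e - lastPull.Sub(firstPull) // SLO duration excludes image-pull time

    	fmt.Println("podStartE2EDuration:", e2e) // 7.217625252s
    	fmt.Println("podStartSLOduration:", slo) // 4.508914214s
    }
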
Dec 13 14:15:05.286223 env[1825]: time="2024-12-13T14:15:05.286161190Z" level=info msg="CreateContainer within sandbox \"13372ac4053010692d47646283b4c8d700099e4b72940a7719f1e5ac20a855a7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"564df5d5a05f0744a07dfa282ffbb67745597eb950712b11df41526a3ca13588\"" Dec 13 14:15:05.288080 env[1825]: time="2024-12-13T14:15:05.288008914Z" level=info msg="StartContainer for \"564df5d5a05f0744a07dfa282ffbb67745597eb950712b11df41526a3ca13588\"" Dec 13 14:15:05.339385 systemd[1]: Started cri-containerd-564df5d5a05f0744a07dfa282ffbb67745597eb950712b11df41526a3ca13588.scope. Dec 13 14:15:05.399944 env[1825]: time="2024-12-13T14:15:05.399864198Z" level=info msg="StartContainer for \"564df5d5a05f0744a07dfa282ffbb67745597eb950712b11df41526a3ca13588\" returns successfully" Dec 13 14:15:05.420205 systemd[1]: cri-containerd-564df5d5a05f0744a07dfa282ffbb67745597eb950712b11df41526a3ca13588.scope: Deactivated successfully. Dec 13 14:15:06.092345 env[1825]: time="2024-12-13T14:15:06.092273549Z" level=info msg="shim disconnected" id=564df5d5a05f0744a07dfa282ffbb67745597eb950712b11df41526a3ca13588 Dec 13 14:15:06.092720 env[1825]: time="2024-12-13T14:15:06.092344377Z" level=warning msg="cleaning up after shim disconnected" id=564df5d5a05f0744a07dfa282ffbb67745597eb950712b11df41526a3ca13588 namespace=k8s.io Dec 13 14:15:06.092720 env[1825]: time="2024-12-13T14:15:06.092366770Z" level=info msg="cleaning up dead shim" Dec 13 14:15:06.106393 env[1825]: time="2024-12-13T14:15:06.106307934Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:15:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3392 runtime=io.containerd.runc.v2\n" Dec 13 14:15:06.265253 systemd[1]: run-containerd-runc-k8s.io-564df5d5a05f0744a07dfa282ffbb67745597eb950712b11df41526a3ca13588-runc.Ny4dhe.mount: Deactivated successfully. Dec 13 14:15:06.265406 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-564df5d5a05f0744a07dfa282ffbb67745597eb950712b11df41526a3ca13588-rootfs.mount: Deactivated successfully. Dec 13 14:15:06.336640 env[1825]: time="2024-12-13T14:15:06.336538642Z" level=info msg="CreateContainer within sandbox \"13372ac4053010692d47646283b4c8d700099e4b72940a7719f1e5ac20a855a7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:15:06.370902 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3838232495.mount: Deactivated successfully. Dec 13 14:15:06.385279 env[1825]: time="2024-12-13T14:15:06.385198101Z" level=info msg="CreateContainer within sandbox \"13372ac4053010692d47646283b4c8d700099e4b72940a7719f1e5ac20a855a7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"cc526783b2b73c50cac158072786d26f85a34746d00b00c14a4b906f188966d7\"" Dec 13 14:15:06.386426 env[1825]: time="2024-12-13T14:15:06.386373884Z" level=info msg="StartContainer for \"cc526783b2b73c50cac158072786d26f85a34746d00b00c14a4b906f188966d7\"" Dec 13 14:15:06.437178 systemd[1]: Started cri-containerd-cc526783b2b73c50cac158072786d26f85a34746d00b00c14a4b906f188966d7.scope. Dec 13 14:15:06.495428 env[1825]: time="2024-12-13T14:15:06.495364435Z" level=info msg="StartContainer for \"cc526783b2b73c50cac158072786d26f85a34746d00b00c14a4b906f188966d7\" returns successfully" Dec 13 14:15:06.518108 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:15:06.519222 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:15:06.519698 systemd[1]: Stopping systemd-sysctl.service... 
Dec 13 14:15:06.528339 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:15:06.540702 systemd[1]: cri-containerd-cc526783b2b73c50cac158072786d26f85a34746d00b00c14a4b906f188966d7.scope: Deactivated successfully. Dec 13 14:15:06.562762 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:15:06.611926 env[1825]: time="2024-12-13T14:15:06.611864641Z" level=info msg="shim disconnected" id=cc526783b2b73c50cac158072786d26f85a34746d00b00c14a4b906f188966d7 Dec 13 14:15:06.612436 env[1825]: time="2024-12-13T14:15:06.612400804Z" level=warning msg="cleaning up after shim disconnected" id=cc526783b2b73c50cac158072786d26f85a34746d00b00c14a4b906f188966d7 namespace=k8s.io Dec 13 14:15:06.612586 env[1825]: time="2024-12-13T14:15:06.612533315Z" level=info msg="cleaning up dead shim" Dec 13 14:15:06.628626 env[1825]: time="2024-12-13T14:15:06.627451509Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:15:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3457 runtime=io.containerd.runc.v2\n" Dec 13 14:15:07.264846 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cc526783b2b73c50cac158072786d26f85a34746d00b00c14a4b906f188966d7-rootfs.mount: Deactivated successfully. Dec 13 14:15:07.337248 env[1825]: time="2024-12-13T14:15:07.337183819Z" level=info msg="CreateContainer within sandbox \"13372ac4053010692d47646283b4c8d700099e4b72940a7719f1e5ac20a855a7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 14:15:07.391726 env[1825]: time="2024-12-13T14:15:07.391641430Z" level=info msg="CreateContainer within sandbox \"13372ac4053010692d47646283b4c8d700099e4b72940a7719f1e5ac20a855a7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f6af755de24147ae175cfa9989ba861dd1a3214e90400efdd1a73a75f114e750\"" Dec 13 14:15:07.394021 env[1825]: time="2024-12-13T14:15:07.392857546Z" level=info msg="StartContainer for \"f6af755de24147ae175cfa9989ba861dd1a3214e90400efdd1a73a75f114e750\"" Dec 13 14:15:07.455253 systemd[1]: Started cri-containerd-f6af755de24147ae175cfa9989ba861dd1a3214e90400efdd1a73a75f114e750.scope. Dec 13 14:15:07.556335 env[1825]: time="2024-12-13T14:15:07.556191197Z" level=info msg="StartContainer for \"f6af755de24147ae175cfa9989ba861dd1a3214e90400efdd1a73a75f114e750\" returns successfully" Dec 13 14:15:07.562043 systemd[1]: cri-containerd-f6af755de24147ae175cfa9989ba861dd1a3214e90400efdd1a73a75f114e750.scope: Deactivated successfully. Dec 13 14:15:07.612109 env[1825]: time="2024-12-13T14:15:07.612032693Z" level=info msg="shim disconnected" id=f6af755de24147ae175cfa9989ba861dd1a3214e90400efdd1a73a75f114e750 Dec 13 14:15:07.612445 env[1825]: time="2024-12-13T14:15:07.612410543Z" level=warning msg="cleaning up after shim disconnected" id=f6af755de24147ae175cfa9989ba861dd1a3214e90400efdd1a73a75f114e750 namespace=k8s.io Dec 13 14:15:07.612601 env[1825]: time="2024-12-13T14:15:07.612573715Z" level=info msg="cleaning up dead shim" Dec 13 14:15:07.626765 env[1825]: time="2024-12-13T14:15:07.626709715Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:15:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3513 runtime=io.containerd.runc.v2\n" Dec 13 14:15:08.264899 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f6af755de24147ae175cfa9989ba861dd1a3214e90400efdd1a73a75f114e750-rootfs.mount: Deactivated successfully. 
Dec 13 14:15:08.347801 env[1825]: time="2024-12-13T14:15:08.347350015Z" level=info msg="CreateContainer within sandbox \"13372ac4053010692d47646283b4c8d700099e4b72940a7719f1e5ac20a855a7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 14:15:08.385278 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2657467015.mount: Deactivated successfully. Dec 13 14:15:08.397914 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2140590359.mount: Deactivated successfully. Dec 13 14:15:08.409658 env[1825]: time="2024-12-13T14:15:08.406753485Z" level=info msg="CreateContainer within sandbox \"13372ac4053010692d47646283b4c8d700099e4b72940a7719f1e5ac20a855a7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c1a1032e48f6a934f31ed3a23e6f350c48088f2f646add135c8ae7195bdf8f8b\"" Dec 13 14:15:08.409658 env[1825]: time="2024-12-13T14:15:08.407813472Z" level=info msg="StartContainer for \"c1a1032e48f6a934f31ed3a23e6f350c48088f2f646add135c8ae7195bdf8f8b\"" Dec 13 14:15:08.462432 systemd[1]: Started cri-containerd-c1a1032e48f6a934f31ed3a23e6f350c48088f2f646add135c8ae7195bdf8f8b.scope. Dec 13 14:15:08.516910 systemd[1]: cri-containerd-c1a1032e48f6a934f31ed3a23e6f350c48088f2f646add135c8ae7195bdf8f8b.scope: Deactivated successfully. Dec 13 14:15:08.520736 env[1825]: time="2024-12-13T14:15:08.519502727Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a9b2c97_7ce9_4375_9824_9fe189b1b749.slice/cri-containerd-c1a1032e48f6a934f31ed3a23e6f350c48088f2f646add135c8ae7195bdf8f8b.scope/memory.events\": no such file or directory" Dec 13 14:15:08.525321 env[1825]: time="2024-12-13T14:15:08.525259619Z" level=info msg="StartContainer for \"c1a1032e48f6a934f31ed3a23e6f350c48088f2f646add135c8ae7195bdf8f8b\" returns successfully" Dec 13 14:15:08.568636 env[1825]: time="2024-12-13T14:15:08.568407386Z" level=info msg="shim disconnected" id=c1a1032e48f6a934f31ed3a23e6f350c48088f2f646add135c8ae7195bdf8f8b Dec 13 14:15:08.569031 env[1825]: time="2024-12-13T14:15:08.568983354Z" level=warning msg="cleaning up after shim disconnected" id=c1a1032e48f6a934f31ed3a23e6f350c48088f2f646add135c8ae7195bdf8f8b namespace=k8s.io Dec 13 14:15:08.569297 env[1825]: time="2024-12-13T14:15:08.569252923Z" level=info msg="cleaning up dead shim" Dec 13 14:15:08.585751 env[1825]: time="2024-12-13T14:15:08.585687458Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:15:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3567 runtime=io.containerd.runc.v2\n" Dec 13 14:15:09.355167 env[1825]: time="2024-12-13T14:15:09.355086192Z" level=info msg="CreateContainer within sandbox \"13372ac4053010692d47646283b4c8d700099e4b72940a7719f1e5ac20a855a7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 14:15:09.393098 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2087030196.mount: Deactivated successfully. 
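
The *cgroupsv2.Manager.EventChan warning above (for the clean-cilium-state container) is a benign race: the init container exits so quickly that its cgroup directory is removed before containerd can attach an inotify watch to memory.events. A small Python sketch of reading that cgroup v2 file while tolerating the same disappearing-directory race; the example path is hypothetical.

from pathlib import Path

def read_memory_events(cgroup_dir):
    """Parse a cgroup v2 memory.events file ("low 0", "oom 0", "oom_kill 0", ...).

    Returns None when the cgroup is already gone, the same condition the
    containerd warning reports as a failed inotify watch.
    """
    try:
        text = (Path(cgroup_dir) / "memory.events").read_text()
    except FileNotFoundError:
        return None
    return {k: int(v) for k, v in (ln.split() for ln in text.splitlines())}

# hypothetical scope directory, matching the shape of the path in the warning
print(read_memory_events("/sys/fs/cgroup/kubepods.slice/example.scope"))
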
Dec 13 14:15:09.403962 env[1825]: time="2024-12-13T14:15:09.403901038Z" level=info msg="CreateContainer within sandbox \"13372ac4053010692d47646283b4c8d700099e4b72940a7719f1e5ac20a855a7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3d6ab77e03c60826643740295cb6f4e353d8f14f8ab5e0cc521b0893686cccd8\"" Dec 13 14:15:09.406159 env[1825]: time="2024-12-13T14:15:09.406109621Z" level=info msg="StartContainer for \"3d6ab77e03c60826643740295cb6f4e353d8f14f8ab5e0cc521b0893686cccd8\"" Dec 13 14:15:09.445764 systemd[1]: Started cri-containerd-3d6ab77e03c60826643740295cb6f4e353d8f14f8ab5e0cc521b0893686cccd8.scope. Dec 13 14:15:09.514414 env[1825]: time="2024-12-13T14:15:09.514324047Z" level=info msg="StartContainer for \"3d6ab77e03c60826643740295cb6f4e353d8f14f8ab5e0cc521b0893686cccd8\" returns successfully" Dec 13 14:15:09.681371 kubelet[2937]: I1213 14:15:09.680241 2937 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 14:15:09.733604 kubelet[2937]: I1213 14:15:09.733150 2937 topology_manager.go:215] "Topology Admit Handler" podUID="c577a912-3ccc-4a5e-8450-b80c05d58048" podNamespace="kube-system" podName="coredns-76f75df574-psq7n" Dec 13 14:15:09.747368 systemd[1]: Created slice kubepods-burstable-podc577a912_3ccc_4a5e_8450_b80c05d58048.slice. Dec 13 14:15:09.751411 kubelet[2937]: I1213 14:15:09.751352 2937 topology_manager.go:215] "Topology Admit Handler" podUID="7e34c595-0204-424f-a70c-85e904d8dbfb" podNamespace="kube-system" podName="coredns-76f75df574-96sks" Dec 13 14:15:09.776320 kubelet[2937]: I1213 14:15:09.776278 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e34c595-0204-424f-a70c-85e904d8dbfb-config-volume\") pod \"coredns-76f75df574-96sks\" (UID: \"7e34c595-0204-424f-a70c-85e904d8dbfb\") " pod="kube-system/coredns-76f75df574-96sks" Dec 13 14:15:09.776620 kubelet[2937]: I1213 14:15:09.776594 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64ffs\" (UniqueName: \"kubernetes.io/projected/7e34c595-0204-424f-a70c-85e904d8dbfb-kube-api-access-64ffs\") pod \"coredns-76f75df574-96sks\" (UID: \"7e34c595-0204-424f-a70c-85e904d8dbfb\") " pod="kube-system/coredns-76f75df574-96sks" Dec 13 14:15:09.776914 kubelet[2937]: I1213 14:15:09.776889 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c577a912-3ccc-4a5e-8450-b80c05d58048-config-volume\") pod \"coredns-76f75df574-psq7n\" (UID: \"c577a912-3ccc-4a5e-8450-b80c05d58048\") " pod="kube-system/coredns-76f75df574-psq7n" Dec 13 14:15:09.777103 kubelet[2937]: I1213 14:15:09.777080 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzwx2\" (UniqueName: \"kubernetes.io/projected/c577a912-3ccc-4a5e-8450-b80c05d58048-kube-api-access-bzwx2\") pod \"coredns-76f75df574-psq7n\" (UID: \"c577a912-3ccc-4a5e-8450-b80c05d58048\") " pod="kube-system/coredns-76f75df574-psq7n" Dec 13 14:15:09.778598 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Dec 13 14:15:09.781723 systemd[1]: Created slice kubepods-burstable-pod7e34c595_0204_424f_a70c_85e904d8dbfb.slice. 
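
The kernel line "WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!" is tied to the kernel.unprivileged_bpf_disabled sysctl (0 = unprivileged BPF allowed, 1 = disabled, 2 = disabled and locked until reboot); its placement next to Cilium's BPF datapath setup suggests program loads on this Spectre-affected CPU are what keep re-triggering the reminder. A quick check on a comparable host:

MEANINGS = {
    "0": "unprivileged eBPF enabled (the state this warning reports)",
    "1": "unprivileged eBPF disabled",
    "2": "unprivileged eBPF disabled and locked until reboot",
}

with open("/proc/sys/kernel/unprivileged_bpf_disabled") as f:
    value = f.read().strip()
print(value, "->", MEANINGS.get(value, "unknown"))
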
Dec 13 14:15:10.062146 env[1825]: time="2024-12-13T14:15:10.061499612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-psq7n,Uid:c577a912-3ccc-4a5e-8450-b80c05d58048,Namespace:kube-system,Attempt:0,}" Dec 13 14:15:10.092926 env[1825]: time="2024-12-13T14:15:10.092865414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-96sks,Uid:7e34c595-0204-424f-a70c-85e904d8dbfb,Namespace:kube-system,Attempt:0,}" Dec 13 14:15:10.624595 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Dec 13 14:15:12.461853 (udev-worker)[3692]: Network interface NamePolicy= disabled on kernel command line. Dec 13 14:15:12.463486 (udev-worker)[3726]: Network interface NamePolicy= disabled on kernel command line. Dec 13 14:15:12.464582 systemd-networkd[1540]: cilium_host: Link UP Dec 13 14:15:12.468736 systemd-networkd[1540]: cilium_net: Link UP Dec 13 14:15:12.470037 systemd-networkd[1540]: cilium_net: Gained carrier Dec 13 14:15:12.471710 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Dec 13 14:15:12.471833 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Dec 13 14:15:12.472104 systemd-networkd[1540]: cilium_host: Gained carrier Dec 13 14:15:12.640603 (udev-worker)[3738]: Network interface NamePolicy= disabled on kernel command line. Dec 13 14:15:12.654733 systemd-networkd[1540]: cilium_vxlan: Link UP Dec 13 14:15:12.654959 systemd-networkd[1540]: cilium_vxlan: Gained carrier Dec 13 14:15:12.856729 systemd-networkd[1540]: cilium_host: Gained IPv6LL Dec 13 14:15:12.888744 systemd-networkd[1540]: cilium_net: Gained IPv6LL Dec 13 14:15:13.137601 kernel: NET: Registered PF_ALG protocol family Dec 13 14:15:14.476822 systemd-networkd[1540]: lxc_health: Link UP Dec 13 14:15:14.510098 systemd-networkd[1540]: lxc_health: Gained carrier Dec 13 14:15:14.511013 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 14:15:14.516456 kubelet[2937]: I1213 14:15:14.516345 2937 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-f5brp" podStartSLOduration=10.870766393 podStartE2EDuration="25.516284076s" podCreationTimestamp="2024-12-13 14:14:49 +0000 UTC" firstStartedPulling="2024-12-13 14:14:50.597717639 +0000 UTC m=+14.684229884" lastFinishedPulling="2024-12-13 14:15:05.243235322 +0000 UTC m=+29.329747567" observedRunningTime="2024-12-13 14:15:10.388520657 +0000 UTC m=+34.475032914" watchObservedRunningTime="2024-12-13 14:15:14.516284076 +0000 UTC m=+38.602796321" Dec 13 14:15:14.571764 systemd-networkd[1540]: cilium_vxlan: Gained IPv6LL Dec 13 14:15:15.143655 systemd-networkd[1540]: lxc7494ca9b04b6: Link UP Dec 13 14:15:15.154675 kernel: eth0: renamed from tmp18179 Dec 13 14:15:15.161897 systemd-networkd[1540]: lxc7494ca9b04b6: Gained carrier Dec 13 14:15:15.162611 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc7494ca9b04b6: link becomes ready Dec 13 14:15:15.185726 systemd-networkd[1540]: lxce90130bf65f1: Link UP Dec 13 14:15:15.200647 kernel: eth0: renamed from tmpa258f Dec 13 14:15:15.216470 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxce90130bf65f1: link becomes ready Dec 13 14:15:15.215325 systemd-networkd[1540]: lxce90130bf65f1: Gained carrier Dec 13 14:15:16.232816 systemd-networkd[1540]: lxc7494ca9b04b6: Gained IPv6LL Dec 13 14:15:16.488749 systemd-networkd[1540]: lxc_health: Gained IPv6LL Dec 13 14:15:17.128727 systemd-networkd[1540]: lxce90130bf65f1: Gained IPv6LL Dec 13 14:15:23.423724 env[1825]: 
time="2024-12-13T14:15:23.423233854Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:15:23.423724 env[1825]: time="2024-12-13T14:15:23.423365547Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:15:23.423724 env[1825]: time="2024-12-13T14:15:23.423395416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:15:23.429251 env[1825]: time="2024-12-13T14:15:23.426818612Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1817988b5e709bd78ba4bb052d919cf16bb48d67ca853248cb9836a03f6c8e13 pid=4103 runtime=io.containerd.runc.v2 Dec 13 14:15:23.471116 systemd[1]: Started cri-containerd-1817988b5e709bd78ba4bb052d919cf16bb48d67ca853248cb9836a03f6c8e13.scope. Dec 13 14:15:23.480053 env[1825]: time="2024-12-13T14:15:23.479924866Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:15:23.480207 env[1825]: time="2024-12-13T14:15:23.480080727Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:15:23.480207 env[1825]: time="2024-12-13T14:15:23.480171486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:15:23.480927 env[1825]: time="2024-12-13T14:15:23.480647242Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a258fa6f6bcc63696df29fa27680fd72a7f7c5c9753c672846659cfcbecb087a pid=4128 runtime=io.containerd.runc.v2 Dec 13 14:15:23.510200 systemd[1]: run-containerd-runc-k8s.io-1817988b5e709bd78ba4bb052d919cf16bb48d67ca853248cb9836a03f6c8e13-runc.OihzWK.mount: Deactivated successfully. Dec 13 14:15:23.536076 systemd[1]: Started cri-containerd-a258fa6f6bcc63696df29fa27680fd72a7f7c5c9753c672846659cfcbecb087a.scope. Dec 13 14:15:23.700453 env[1825]: time="2024-12-13T14:15:23.698500444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-96sks,Uid:7e34c595-0204-424f-a70c-85e904d8dbfb,Namespace:kube-system,Attempt:0,} returns sandbox id \"a258fa6f6bcc63696df29fa27680fd72a7f7c5c9753c672846659cfcbecb087a\"" Dec 13 14:15:23.718237 systemd[1]: Started sshd@5-172.31.24.251:22-139.178.89.65:37990.service. 
Dec 13 14:15:23.729130 env[1825]: time="2024-12-13T14:15:23.729028353Z" level=info msg="CreateContainer within sandbox \"a258fa6f6bcc63696df29fa27680fd72a7f7c5c9753c672846659cfcbecb087a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 14:15:23.754229 env[1825]: time="2024-12-13T14:15:23.754157273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-psq7n,Uid:c577a912-3ccc-4a5e-8450-b80c05d58048,Namespace:kube-system,Attempt:0,} returns sandbox id \"1817988b5e709bd78ba4bb052d919cf16bb48d67ca853248cb9836a03f6c8e13\"" Dec 13 14:15:23.767340 env[1825]: time="2024-12-13T14:15:23.767227958Z" level=info msg="CreateContainer within sandbox \"1817988b5e709bd78ba4bb052d919cf16bb48d67ca853248cb9836a03f6c8e13\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 14:15:23.800975 env[1825]: time="2024-12-13T14:15:23.800884874Z" level=info msg="CreateContainer within sandbox \"a258fa6f6bcc63696df29fa27680fd72a7f7c5c9753c672846659cfcbecb087a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"305668ec92c8d19038f1dff644b3c15bc9d13b1bc37019f639c8877b414d8ff4\"" Dec 13 14:15:23.802225 env[1825]: time="2024-12-13T14:15:23.802168076Z" level=info msg="StartContainer for \"305668ec92c8d19038f1dff644b3c15bc9d13b1bc37019f639c8877b414d8ff4\"" Dec 13 14:15:23.829789 env[1825]: time="2024-12-13T14:15:23.827439861Z" level=info msg="CreateContainer within sandbox \"1817988b5e709bd78ba4bb052d919cf16bb48d67ca853248cb9836a03f6c8e13\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f50c3aebd312a644b73238bec364c1be5804e12ec71a07c13318064cb52dbfd9\"" Dec 13 14:15:23.834040 env[1825]: time="2024-12-13T14:15:23.833984408Z" level=info msg="StartContainer for \"f50c3aebd312a644b73238bec364c1be5804e12ec71a07c13318064cb52dbfd9\"" Dec 13 14:15:23.889520 systemd[1]: Started cri-containerd-305668ec92c8d19038f1dff644b3c15bc9d13b1bc37019f639c8877b414d8ff4.scope. Dec 13 14:15:23.913440 systemd[1]: Started cri-containerd-f50c3aebd312a644b73238bec364c1be5804e12ec71a07c13318064cb52dbfd9.scope. Dec 13 14:15:23.939818 sshd[4182]: Accepted publickey for core from 139.178.89.65 port 37990 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:15:23.942855 sshd[4182]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:15:23.955310 systemd-logind[1810]: New session 6 of user core. Dec 13 14:15:23.956518 systemd[1]: Started session-6.scope. Dec 13 14:15:24.038211 env[1825]: time="2024-12-13T14:15:24.038115334Z" level=info msg="StartContainer for \"f50c3aebd312a644b73238bec364c1be5804e12ec71a07c13318064cb52dbfd9\" returns successfully" Dec 13 14:15:24.054202 env[1825]: time="2024-12-13T14:15:24.054109843Z" level=info msg="StartContainer for \"305668ec92c8d19038f1dff644b3c15bc9d13b1bc37019f639c8877b414d8ff4\" returns successfully" Dec 13 14:15:24.306422 sshd[4182]: pam_unix(sshd:session): session closed for user core Dec 13 14:15:24.311492 systemd[1]: sshd@5-172.31.24.251:22-139.178.89.65:37990.service: Deactivated successfully. Dec 13 14:15:24.312854 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 14:15:24.315054 systemd-logind[1810]: Session 6 logged out. Waiting for processes to exit. Dec 13 14:15:24.316850 systemd-logind[1810]: Removed session 6. 
Dec 13 14:15:24.412088 kubelet[2937]: I1213 14:15:24.412025 2937 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-96sks" podStartSLOduration=35.411943255 podStartE2EDuration="35.411943255s" podCreationTimestamp="2024-12-13 14:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:15:24.410470508 +0000 UTC m=+48.496982777" watchObservedRunningTime="2024-12-13 14:15:24.411943255 +0000 UTC m=+48.498455512" Dec 13 14:15:24.440102 systemd[1]: run-containerd-runc-k8s.io-a258fa6f6bcc63696df29fa27680fd72a7f7c5c9753c672846659cfcbecb087a-runc.Xv370K.mount: Deactivated successfully. Dec 13 14:15:24.445122 kubelet[2937]: I1213 14:15:24.445048 2937 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-psq7n" podStartSLOduration=35.444965181 podStartE2EDuration="35.444965181s" podCreationTimestamp="2024-12-13 14:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:15:24.444269026 +0000 UTC m=+48.530781283" watchObservedRunningTime="2024-12-13 14:15:24.444965181 +0000 UTC m=+48.531477486" Dec 13 14:15:29.335774 systemd[1]: Started sshd@6-172.31.24.251:22-139.178.89.65:57472.service. Dec 13 14:15:29.506804 sshd[4282]: Accepted publickey for core from 139.178.89.65 port 57472 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:15:29.509951 sshd[4282]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:15:29.517882 systemd-logind[1810]: New session 7 of user core. Dec 13 14:15:29.518608 systemd[1]: Started session-7.scope. Dec 13 14:15:29.765115 sshd[4282]: pam_unix(sshd:session): session closed for user core Dec 13 14:15:29.771394 systemd-logind[1810]: Session 7 logged out. Waiting for processes to exit. Dec 13 14:15:29.772076 systemd[1]: sshd@6-172.31.24.251:22-139.178.89.65:57472.service: Deactivated successfully. Dec 13 14:15:29.773329 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 14:15:29.777924 systemd-logind[1810]: Removed session 7. Dec 13 14:15:34.798912 systemd[1]: Started sshd@7-172.31.24.251:22-139.178.89.65:57482.service. Dec 13 14:15:34.976140 sshd[4295]: Accepted publickey for core from 139.178.89.65 port 57482 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:15:34.978793 sshd[4295]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:15:34.986830 systemd-logind[1810]: New session 8 of user core. Dec 13 14:15:34.987726 systemd[1]: Started session-8.scope. Dec 13 14:15:35.244058 sshd[4295]: pam_unix(sshd:session): session closed for user core Dec 13 14:15:35.248947 systemd[1]: sshd@7-172.31.24.251:22-139.178.89.65:57482.service: Deactivated successfully. Dec 13 14:15:35.251433 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 14:15:35.253444 systemd-logind[1810]: Session 8 logged out. Waiting for processes to exit. Dec 13 14:15:35.256185 systemd-logind[1810]: Removed session 8. Dec 13 14:15:40.274516 systemd[1]: Started sshd@8-172.31.24.251:22-139.178.89.65:59596.service. 
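
The pod_startup_latency_tracker entries are internally consistent and worth decoding: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes image-pull time (lastFinishedPulling minus firstStartedPulling). For cilium-f5brp earlier that is 25.516 s end to end minus 14.646 s of pulling = 10.871 s; for the two coredns pods above the pull timestamps are Go zero values (0001-01-01), i.e. no pull was needed, so SLO and E2E durations coincide at about 35.41 s. A short verification in Python, with the values copied from the cilium-f5brp line:

from datetime import datetime, timezone

def parse_k8s(ts):
    """Parse kubelet's 'YYYY-MM-DD HH:MM:SS.nnnnnnnnn +0000 UTC' form,
    truncating nanoseconds to the microseconds datetime supports."""
    date, clock = ts.split(" ")[:2]
    whole, _, frac = clock.partition(".")
    dt = datetime.strptime(f"{date} {whole}.{(frac or '0')[:6]}",
                           "%Y-%m-%d %H:%M:%S.%f")
    return dt.replace(tzinfo=timezone.utc)

created = parse_k8s("2024-12-13 14:14:49 +0000 UTC")
running = parse_k8s("2024-12-13 14:15:14.516284076 +0000 UTC")
pull_a  = parse_k8s("2024-12-13 14:14:50.597717639 +0000 UTC")
pull_b  = parse_k8s("2024-12-13 14:15:05.243235322 +0000 UTC")

e2e = (running - created).total_seconds()
slo = e2e - (pull_b - pull_a).total_seconds()
print(f"E2E {e2e:.3f}s, SLO {slo:.3f}s")  # E2E 25.516s, SLO 10.871s
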
Dec 13 14:15:40.451968 sshd[4310]: Accepted publickey for core from 139.178.89.65 port 59596 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:15:40.455056 sshd[4310]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:15:40.463810 systemd[1]: Started session-9.scope. Dec 13 14:15:40.463986 systemd-logind[1810]: New session 9 of user core. Dec 13 14:15:40.714243 sshd[4310]: pam_unix(sshd:session): session closed for user core Dec 13 14:15:40.719028 systemd-logind[1810]: Session 9 logged out. Waiting for processes to exit. Dec 13 14:15:40.719690 systemd[1]: sshd@8-172.31.24.251:22-139.178.89.65:59596.service: Deactivated successfully. Dec 13 14:15:40.720968 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 14:15:40.723251 systemd-logind[1810]: Removed session 9. Dec 13 14:15:45.743632 systemd[1]: Started sshd@9-172.31.24.251:22-139.178.89.65:59606.service. Dec 13 14:15:45.919371 sshd[4322]: Accepted publickey for core from 139.178.89.65 port 59606 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:15:45.922642 sshd[4322]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:15:45.931375 systemd[1]: Started session-10.scope. Dec 13 14:15:45.932531 systemd-logind[1810]: New session 10 of user core. Dec 13 14:15:46.183538 sshd[4322]: pam_unix(sshd:session): session closed for user core Dec 13 14:15:46.188512 systemd[1]: sshd@9-172.31.24.251:22-139.178.89.65:59606.service: Deactivated successfully. Dec 13 14:15:46.189909 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 14:15:46.191442 systemd-logind[1810]: Session 10 logged out. Waiting for processes to exit. Dec 13 14:15:46.193098 systemd-logind[1810]: Removed session 10. Dec 13 14:15:46.218255 systemd[1]: Started sshd@10-172.31.24.251:22-139.178.89.65:59622.service. Dec 13 14:15:46.392202 sshd[4334]: Accepted publickey for core from 139.178.89.65 port 59622 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:15:46.394990 sshd[4334]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:15:46.403258 systemd-logind[1810]: New session 11 of user core. Dec 13 14:15:46.403937 systemd[1]: Started session-11.scope. Dec 13 14:15:46.736499 sshd[4334]: pam_unix(sshd:session): session closed for user core Dec 13 14:15:46.742276 systemd[1]: sshd@10-172.31.24.251:22-139.178.89.65:59622.service: Deactivated successfully. Dec 13 14:15:46.743596 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 14:15:46.747082 systemd-logind[1810]: Session 11 logged out. Waiting for processes to exit. Dec 13 14:15:46.748984 systemd-logind[1810]: Removed session 11. Dec 13 14:15:46.763124 systemd[1]: Started sshd@11-172.31.24.251:22-139.178.89.65:59638.service. Dec 13 14:15:46.958462 sshd[4344]: Accepted publickey for core from 139.178.89.65 port 59638 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:15:46.960995 sshd[4344]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:15:46.970955 systemd[1]: Started session-12.scope. Dec 13 14:15:46.972665 systemd-logind[1810]: New session 12 of user core. Dec 13 14:15:47.219915 sshd[4344]: pam_unix(sshd:session): session closed for user core Dec 13 14:15:47.225094 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 14:15:47.226667 systemd-logind[1810]: Session 12 logged out. Waiting for processes to exit. 
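
From here the node is mostly idle and the journal settles into a steady rhythm of short SSH sessions: Accepted publickey, pam_unix session opened, session scope started, then closed and deactivated a few seconds later. A sketch that pairs the pam_unix open/close lines by sshd pid to measure session lengths; the timestamp regex is fitted to this journal's prefix and ignores pid reuse.

import re
from datetime import datetime

PAM = re.compile(r'^(\w{3} +\d+ [\d:]+)\.\d+ sshd\[(\d+)\]: '
                 r'pam_unix\(sshd:session\): session (opened|closed)')

def session_durations(lines, year=2024):
    """Pair pam_unix open/close entries per sshd pid; return seconds per session."""
    opened, done = {}, []
    for line in lines:
        m = PAM.match(line)
        if not m:
            continue
        when = datetime.strptime(f"{year} {m[1]}", "%Y %b %d %H:%M:%S")
        if m[3] == "opened":
            opened[m[2]] = when
        elif m[2] in opened:
            done.append((m[2], (when - opened.pop(m[2])).total_seconds()))
    return done

log = [
    "Dec 13 14:15:40.455056 sshd[4310]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)",
    "Dec 13 14:15:40.714243 sshd[4310]: pam_unix(sshd:session): session closed for user core",
]
print(session_durations(log))  # [('4310', 0.0)]
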
Dec 13 14:15:47.227059 systemd[1]: sshd@11-172.31.24.251:22-139.178.89.65:59638.service: Deactivated successfully. Dec 13 14:15:47.229253 systemd-logind[1810]: Removed session 12. Dec 13 14:15:52.250024 systemd[1]: Started sshd@12-172.31.24.251:22-139.178.89.65:33326.service. Dec 13 14:15:52.426967 sshd[4359]: Accepted publickey for core from 139.178.89.65 port 33326 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:15:52.430199 sshd[4359]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:15:52.438701 systemd-logind[1810]: New session 13 of user core. Dec 13 14:15:52.439125 systemd[1]: Started session-13.scope. Dec 13 14:15:52.694320 sshd[4359]: pam_unix(sshd:session): session closed for user core Dec 13 14:15:52.699354 systemd[1]: sshd@12-172.31.24.251:22-139.178.89.65:33326.service: Deactivated successfully. Dec 13 14:15:52.700720 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 14:15:52.702331 systemd-logind[1810]: Session 13 logged out. Waiting for processes to exit. Dec 13 14:15:52.705460 systemd-logind[1810]: Removed session 13. Dec 13 14:15:57.724137 systemd[1]: Started sshd@13-172.31.24.251:22-139.178.89.65:33336.service. Dec 13 14:15:57.900897 sshd[4371]: Accepted publickey for core from 139.178.89.65 port 33336 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:15:57.903469 sshd[4371]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:15:57.912485 systemd[1]: Started session-14.scope. Dec 13 14:15:57.913455 systemd-logind[1810]: New session 14 of user core. Dec 13 14:15:58.160135 sshd[4371]: pam_unix(sshd:session): session closed for user core Dec 13 14:15:58.166467 systemd[1]: sshd@13-172.31.24.251:22-139.178.89.65:33336.service: Deactivated successfully. Dec 13 14:15:58.167865 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 14:15:58.169002 systemd-logind[1810]: Session 14 logged out. Waiting for processes to exit. Dec 13 14:15:58.170711 systemd-logind[1810]: Removed session 14. Dec 13 14:16:03.189681 systemd[1]: Started sshd@14-172.31.24.251:22-139.178.89.65:36788.service. Dec 13 14:16:03.361695 sshd[4383]: Accepted publickey for core from 139.178.89.65 port 36788 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:16:03.364039 sshd[4383]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:16:03.372024 systemd-logind[1810]: New session 15 of user core. Dec 13 14:16:03.372937 systemd[1]: Started session-15.scope. Dec 13 14:16:03.634687 sshd[4383]: pam_unix(sshd:session): session closed for user core Dec 13 14:16:03.640186 systemd-logind[1810]: Session 15 logged out. Waiting for processes to exit. Dec 13 14:16:03.640827 systemd[1]: sshd@14-172.31.24.251:22-139.178.89.65:36788.service: Deactivated successfully. Dec 13 14:16:03.642133 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 14:16:03.644232 systemd-logind[1810]: Removed session 15. Dec 13 14:16:08.663490 systemd[1]: Started sshd@15-172.31.24.251:22-139.178.89.65:58548.service. Dec 13 14:16:08.834327 sshd[4395]: Accepted publickey for core from 139.178.89.65 port 58548 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:16:08.837590 sshd[4395]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:16:08.846909 systemd-logind[1810]: New session 16 of user core. Dec 13 14:16:08.847025 systemd[1]: Started session-16.scope. 
Dec 13 14:16:09.093650 sshd[4395]: pam_unix(sshd:session): session closed for user core Dec 13 14:16:09.098771 systemd-logind[1810]: Session 16 logged out. Waiting for processes to exit. Dec 13 14:16:09.100131 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 14:16:09.101186 systemd-logind[1810]: Removed session 16. Dec 13 14:16:09.102043 systemd[1]: sshd@15-172.31.24.251:22-139.178.89.65:58548.service: Deactivated successfully. Dec 13 14:16:09.122511 systemd[1]: Started sshd@16-172.31.24.251:22-139.178.89.65:58560.service. Dec 13 14:16:09.295171 sshd[4407]: Accepted publickey for core from 139.178.89.65 port 58560 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:16:09.297745 sshd[4407]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:16:09.305942 systemd-logind[1810]: New session 17 of user core. Dec 13 14:16:09.306835 systemd[1]: Started session-17.scope. Dec 13 14:16:09.604702 sshd[4407]: pam_unix(sshd:session): session closed for user core Dec 13 14:16:09.610812 systemd[1]: sshd@16-172.31.24.251:22-139.178.89.65:58560.service: Deactivated successfully. Dec 13 14:16:09.612135 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 14:16:09.613293 systemd-logind[1810]: Session 17 logged out. Waiting for processes to exit. Dec 13 14:16:09.615365 systemd-logind[1810]: Removed session 17. Dec 13 14:16:09.632376 systemd[1]: Started sshd@17-172.31.24.251:22-139.178.89.65:58562.service. Dec 13 14:16:09.803478 sshd[4417]: Accepted publickey for core from 139.178.89.65 port 58562 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:16:09.806624 sshd[4417]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:16:09.814834 systemd-logind[1810]: New session 18 of user core. Dec 13 14:16:09.815342 systemd[1]: Started session-18.scope. Dec 13 14:16:12.422827 sshd[4417]: pam_unix(sshd:session): session closed for user core Dec 13 14:16:12.428539 systemd[1]: sshd@17-172.31.24.251:22-139.178.89.65:58562.service: Deactivated successfully. Dec 13 14:16:12.429888 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 14:16:12.433090 systemd-logind[1810]: Session 18 logged out. Waiting for processes to exit. Dec 13 14:16:12.436379 systemd-logind[1810]: Removed session 18. Dec 13 14:16:12.451370 systemd[1]: Started sshd@18-172.31.24.251:22-139.178.89.65:58578.service. Dec 13 14:16:12.639261 sshd[4435]: Accepted publickey for core from 139.178.89.65 port 58578 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:16:12.641888 sshd[4435]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:16:12.649656 systemd-logind[1810]: New session 19 of user core. Dec 13 14:16:12.651325 systemd[1]: Started session-19.scope. Dec 13 14:16:13.147866 sshd[4435]: pam_unix(sshd:session): session closed for user core Dec 13 14:16:13.153415 systemd-logind[1810]: Session 19 logged out. Waiting for processes to exit. Dec 13 14:16:13.154026 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 14:16:13.156378 systemd[1]: sshd@18-172.31.24.251:22-139.178.89.65:58578.service: Deactivated successfully. Dec 13 14:16:13.159333 systemd-logind[1810]: Removed session 19. Dec 13 14:16:13.178373 systemd[1]: Started sshd@19-172.31.24.251:22-139.178.89.65:58582.service. 
Dec 13 14:16:13.353237 sshd[4445]: Accepted publickey for core from 139.178.89.65 port 58582 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:16:13.356329 sshd[4445]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:16:13.364864 systemd[1]: Started session-20.scope. Dec 13 14:16:13.366373 systemd-logind[1810]: New session 20 of user core. Dec 13 14:16:13.628841 sshd[4445]: pam_unix(sshd:session): session closed for user core Dec 13 14:16:13.634495 systemd[1]: sshd@19-172.31.24.251:22-139.178.89.65:58582.service: Deactivated successfully. Dec 13 14:16:13.635858 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 14:16:13.637901 systemd-logind[1810]: Session 20 logged out. Waiting for processes to exit. Dec 13 14:16:13.640176 systemd-logind[1810]: Removed session 20. Dec 13 14:16:18.657941 systemd[1]: Started sshd@20-172.31.24.251:22-139.178.89.65:46882.service. Dec 13 14:16:18.835203 sshd[4457]: Accepted publickey for core from 139.178.89.65 port 46882 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:16:18.837792 sshd[4457]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:16:18.845725 systemd-logind[1810]: New session 21 of user core. Dec 13 14:16:18.846608 systemd[1]: Started session-21.scope. Dec 13 14:16:19.107032 sshd[4457]: pam_unix(sshd:session): session closed for user core Dec 13 14:16:19.112780 systemd[1]: sshd@20-172.31.24.251:22-139.178.89.65:46882.service: Deactivated successfully. Dec 13 14:16:19.114151 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 14:16:19.115405 systemd-logind[1810]: Session 21 logged out. Waiting for processes to exit. Dec 13 14:16:19.117436 systemd-logind[1810]: Removed session 21. Dec 13 14:16:24.137296 systemd[1]: Started sshd@21-172.31.24.251:22-139.178.89.65:46890.service. Dec 13 14:16:24.315279 sshd[4474]: Accepted publickey for core from 139.178.89.65 port 46890 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:16:24.318440 sshd[4474]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:16:24.325847 systemd-logind[1810]: New session 22 of user core. Dec 13 14:16:24.326902 systemd[1]: Started session-22.scope. Dec 13 14:16:24.581690 sshd[4474]: pam_unix(sshd:session): session closed for user core Dec 13 14:16:24.586755 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 14:16:24.587984 systemd-logind[1810]: Session 22 logged out. Waiting for processes to exit. Dec 13 14:16:24.588367 systemd[1]: sshd@21-172.31.24.251:22-139.178.89.65:46890.service: Deactivated successfully. Dec 13 14:16:24.591540 systemd-logind[1810]: Removed session 22. Dec 13 14:16:29.613820 systemd[1]: Started sshd@22-172.31.24.251:22-139.178.89.65:39418.service. Dec 13 14:16:29.789251 sshd[4487]: Accepted publickey for core from 139.178.89.65 port 39418 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:16:29.791885 sshd[4487]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:16:29.800538 systemd[1]: Started session-23.scope. Dec 13 14:16:29.801699 systemd-logind[1810]: New session 23 of user core. Dec 13 14:16:30.048823 sshd[4487]: pam_unix(sshd:session): session closed for user core Dec 13 14:16:30.053367 systemd[1]: sshd@22-172.31.24.251:22-139.178.89.65:39418.service: Deactivated successfully. Dec 13 14:16:30.055047 systemd[1]: session-23.scope: Deactivated successfully. 
Dec 13 14:16:30.056333 systemd-logind[1810]: Session 23 logged out. Waiting for processes to exit. Dec 13 14:16:30.058132 systemd-logind[1810]: Removed session 23. Dec 13 14:16:35.078463 systemd[1]: Started sshd@23-172.31.24.251:22-139.178.89.65:39420.service. Dec 13 14:16:35.255335 sshd[4500]: Accepted publickey for core from 139.178.89.65 port 39420 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:16:35.258077 sshd[4500]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:16:35.268083 systemd[1]: Started session-24.scope. Dec 13 14:16:35.268905 systemd-logind[1810]: New session 24 of user core. Dec 13 14:16:35.521012 sshd[4500]: pam_unix(sshd:session): session closed for user core Dec 13 14:16:35.526042 systemd-logind[1810]: Session 24 logged out. Waiting for processes to exit. Dec 13 14:16:35.528125 systemd[1]: sshd@23-172.31.24.251:22-139.178.89.65:39420.service: Deactivated successfully. Dec 13 14:16:35.529437 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 14:16:35.531382 systemd-logind[1810]: Removed session 24. Dec 13 14:16:35.548837 systemd[1]: Started sshd@24-172.31.24.251:22-139.178.89.65:39424.service. Dec 13 14:16:35.721668 sshd[4512]: Accepted publickey for core from 139.178.89.65 port 39424 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:16:35.724738 sshd[4512]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:16:35.733379 systemd[1]: Started session-25.scope. Dec 13 14:16:35.734635 systemd-logind[1810]: New session 25 of user core. Dec 13 14:16:38.098707 env[1825]: time="2024-12-13T14:16:38.098631343Z" level=info msg="StopContainer for \"98e76a2989669d39eec48a15c8705a9f3ffb7f9b739f383bfc0a0533b0a1d47f\" with timeout 30 (s)" Dec 13 14:16:38.099309 env[1825]: time="2024-12-13T14:16:38.099125700Z" level=info msg="Stop container \"98e76a2989669d39eec48a15c8705a9f3ffb7f9b739f383bfc0a0533b0a1d47f\" with signal terminated" Dec 13 14:16:38.119618 systemd[1]: run-containerd-runc-k8s.io-3d6ab77e03c60826643740295cb6f4e353d8f14f8ab5e0cc521b0893686cccd8-runc.UajxUA.mount: Deactivated successfully. Dec 13 14:16:38.130457 systemd[1]: cri-containerd-98e76a2989669d39eec48a15c8705a9f3ffb7f9b739f383bfc0a0533b0a1d47f.scope: Deactivated successfully. Dec 13 14:16:38.167032 env[1825]: time="2024-12-13T14:16:38.166935832Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:16:38.183805 env[1825]: time="2024-12-13T14:16:38.183751534Z" level=info msg="StopContainer for \"3d6ab77e03c60826643740295cb6f4e353d8f14f8ab5e0cc521b0893686cccd8\" with timeout 2 (s)" Dec 13 14:16:38.184532 env[1825]: time="2024-12-13T14:16:38.184486637Z" level=info msg="Stop container \"3d6ab77e03c60826643740295cb6f4e353d8f14f8ab5e0cc521b0893686cccd8\" with signal terminated" Dec 13 14:16:38.188002 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-98e76a2989669d39eec48a15c8705a9f3ffb7f9b739f383bfc0a0533b0a1d47f-rootfs.mount: Deactivated successfully. 
Dec 13 14:16:38.216918 env[1825]: time="2024-12-13T14:16:38.216857140Z" level=info msg="shim disconnected" id=98e76a2989669d39eec48a15c8705a9f3ffb7f9b739f383bfc0a0533b0a1d47f Dec 13 14:16:38.217323 env[1825]: time="2024-12-13T14:16:38.217284572Z" level=warning msg="cleaning up after shim disconnected" id=98e76a2989669d39eec48a15c8705a9f3ffb7f9b739f383bfc0a0533b0a1d47f namespace=k8s.io Dec 13 14:16:38.217458 env[1825]: time="2024-12-13T14:16:38.217429293Z" level=info msg="cleaning up dead shim" Dec 13 14:16:38.219517 systemd-networkd[1540]: lxc_health: Link DOWN Dec 13 14:16:38.219538 systemd-networkd[1540]: lxc_health: Lost carrier Dec 13 14:16:38.247427 env[1825]: time="2024-12-13T14:16:38.247372639Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:16:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4562 runtime=io.containerd.runc.v2\n" Dec 13 14:16:38.251950 env[1825]: time="2024-12-13T14:16:38.251892665Z" level=info msg="StopContainer for \"98e76a2989669d39eec48a15c8705a9f3ffb7f9b739f383bfc0a0533b0a1d47f\" returns successfully" Dec 13 14:16:38.253238 env[1825]: time="2024-12-13T14:16:38.253186555Z" level=info msg="StopPodSandbox for \"9151158ca0f36cd19229f0cbf113091bc074b944572eceddf2cb2a2fc12ce26c\"" Dec 13 14:16:38.253503 env[1825]: time="2024-12-13T14:16:38.253466301Z" level=info msg="Container to stop \"98e76a2989669d39eec48a15c8705a9f3ffb7f9b739f383bfc0a0533b0a1d47f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:16:38.255207 systemd[1]: cri-containerd-3d6ab77e03c60826643740295cb6f4e353d8f14f8ab5e0cc521b0893686cccd8.scope: Deactivated successfully. Dec 13 14:16:38.255827 systemd[1]: cri-containerd-3d6ab77e03c60826643740295cb6f4e353d8f14f8ab5e0cc521b0893686cccd8.scope: Consumed 14.027s CPU time. Dec 13 14:16:38.259615 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9151158ca0f36cd19229f0cbf113091bc074b944572eceddf2cb2a2fc12ce26c-shm.mount: Deactivated successfully. Dec 13 14:16:38.284267 systemd[1]: cri-containerd-9151158ca0f36cd19229f0cbf113091bc074b944572eceddf2cb2a2fc12ce26c.scope: Deactivated successfully. 
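
The two StopContainer calls above carry different grace periods (30 s for the operator-side container, 2 s for cilium-agent, presumably from the pods' configured termination grace) and both begin "with signal terminated". The underlying pattern, sketched generically in Python rather than as containerd's actual implementation, is SIGTERM, a bounded wait, then SIGKILL:

import os, signal, time

def stop_process(pid, timeout):
    """SIGTERM, poll until the process exits or the grace period lapses,
    then SIGKILL. A generic sketch of the stop-with-timeout pattern."""
    os.kill(pid, signal.SIGTERM)
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            os.kill(pid, 0)          # signal 0 only probes for existence
        except ProcessLookupError:
            return "terminated within grace period"
        time.sleep(0.1)
    os.kill(pid, signal.SIGKILL)
    return "killed after timeout"
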
Dec 13 14:16:38.316447 env[1825]: time="2024-12-13T14:16:38.316385783Z" level=info msg="shim disconnected" id=3d6ab77e03c60826643740295cb6f4e353d8f14f8ab5e0cc521b0893686cccd8 Dec 13 14:16:38.316970 env[1825]: time="2024-12-13T14:16:38.316926160Z" level=warning msg="cleaning up after shim disconnected" id=3d6ab77e03c60826643740295cb6f4e353d8f14f8ab5e0cc521b0893686cccd8 namespace=k8s.io Dec 13 14:16:38.317120 env[1825]: time="2024-12-13T14:16:38.317090466Z" level=info msg="cleaning up dead shim" Dec 13 14:16:38.336949 env[1825]: time="2024-12-13T14:16:38.336860466Z" level=info msg="shim disconnected" id=9151158ca0f36cd19229f0cbf113091bc074b944572eceddf2cb2a2fc12ce26c Dec 13 14:16:38.336949 env[1825]: time="2024-12-13T14:16:38.336935671Z" level=warning msg="cleaning up after shim disconnected" id=9151158ca0f36cd19229f0cbf113091bc074b944572eceddf2cb2a2fc12ce26c namespace=k8s.io Dec 13 14:16:38.337391 env[1825]: time="2024-12-13T14:16:38.336958879Z" level=info msg="cleaning up dead shim" Dec 13 14:16:38.343808 env[1825]: time="2024-12-13T14:16:38.343750337Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:16:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4609 runtime=io.containerd.runc.v2\n" Dec 13 14:16:38.348420 env[1825]: time="2024-12-13T14:16:38.348358049Z" level=info msg="StopContainer for \"3d6ab77e03c60826643740295cb6f4e353d8f14f8ab5e0cc521b0893686cccd8\" returns successfully" Dec 13 14:16:38.351315 env[1825]: time="2024-12-13T14:16:38.349390011Z" level=info msg="StopPodSandbox for \"13372ac4053010692d47646283b4c8d700099e4b72940a7719f1e5ac20a855a7\"" Dec 13 14:16:38.351766 env[1825]: time="2024-12-13T14:16:38.351707859Z" level=info msg="Container to stop \"f6af755de24147ae175cfa9989ba861dd1a3214e90400efdd1a73a75f114e750\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:16:38.352198 env[1825]: time="2024-12-13T14:16:38.352144112Z" level=info msg="Container to stop \"c1a1032e48f6a934f31ed3a23e6f350c48088f2f646add135c8ae7195bdf8f8b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:16:38.352427 env[1825]: time="2024-12-13T14:16:38.352388182Z" level=info msg="Container to stop \"3d6ab77e03c60826643740295cb6f4e353d8f14f8ab5e0cc521b0893686cccd8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:16:38.352654 env[1825]: time="2024-12-13T14:16:38.352598917Z" level=info msg="Container to stop \"564df5d5a05f0744a07dfa282ffbb67745597eb950712b11df41526a3ca13588\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:16:38.352838 env[1825]: time="2024-12-13T14:16:38.352797243Z" level=info msg="Container to stop \"cc526783b2b73c50cac158072786d26f85a34746d00b00c14a4b906f188966d7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:16:38.359522 env[1825]: time="2024-12-13T14:16:38.359440091Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:16:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4620 runtime=io.containerd.runc.v2\n" Dec 13 14:16:38.360160 env[1825]: time="2024-12-13T14:16:38.360099690Z" level=info msg="TearDown network for sandbox \"9151158ca0f36cd19229f0cbf113091bc074b944572eceddf2cb2a2fc12ce26c\" successfully" Dec 13 14:16:38.360299 env[1825]: time="2024-12-13T14:16:38.360154579Z" level=info msg="StopPodSandbox for \"9151158ca0f36cd19229f0cbf113091bc074b944572eceddf2cb2a2fc12ce26c\" returns successfully" Dec 13 14:16:38.377493 systemd[1]: 
cri-containerd-13372ac4053010692d47646283b4c8d700099e4b72940a7719f1e5ac20a855a7.scope: Deactivated successfully. Dec 13 14:16:38.424888 env[1825]: time="2024-12-13T14:16:38.424825358Z" level=info msg="shim disconnected" id=13372ac4053010692d47646283b4c8d700099e4b72940a7719f1e5ac20a855a7 Dec 13 14:16:38.425501 env[1825]: time="2024-12-13T14:16:38.425459961Z" level=warning msg="cleaning up after shim disconnected" id=13372ac4053010692d47646283b4c8d700099e4b72940a7719f1e5ac20a855a7 namespace=k8s.io Dec 13 14:16:38.425705 env[1825]: time="2024-12-13T14:16:38.425674295Z" level=info msg="cleaning up dead shim" Dec 13 14:16:38.441384 env[1825]: time="2024-12-13T14:16:38.441325876Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:16:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4657 runtime=io.containerd.runc.v2\n" Dec 13 14:16:38.442179 env[1825]: time="2024-12-13T14:16:38.442134949Z" level=info msg="TearDown network for sandbox \"13372ac4053010692d47646283b4c8d700099e4b72940a7719f1e5ac20a855a7\" successfully" Dec 13 14:16:38.442389 env[1825]: time="2024-12-13T14:16:38.442355979Z" level=info msg="StopPodSandbox for \"13372ac4053010692d47646283b4c8d700099e4b72940a7719f1e5ac20a855a7\" returns successfully" Dec 13 14:16:38.523405 kubelet[2937]: I1213 14:16:38.523350 2937 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/666fdc71-29e9-47f9-860d-0a6c19f70292-cilium-config-path\") pod \"666fdc71-29e9-47f9-860d-0a6c19f70292\" (UID: \"666fdc71-29e9-47f9-860d-0a6c19f70292\") " Dec 13 14:16:38.524042 kubelet[2937]: I1213 14:16:38.523424 2937 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9a9b2c97-7ce9-4375-9824-9fe189b1b749-host-proc-sys-net\") pod \"9a9b2c97-7ce9-4375-9824-9fe189b1b749\" (UID: \"9a9b2c97-7ce9-4375-9824-9fe189b1b749\") " Dec 13 14:16:38.524042 kubelet[2937]: I1213 14:16:38.523467 2937 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9a9b2c97-7ce9-4375-9824-9fe189b1b749-xtables-lock\") pod \"9a9b2c97-7ce9-4375-9824-9fe189b1b749\" (UID: \"9a9b2c97-7ce9-4375-9824-9fe189b1b749\") " Dec 13 14:16:38.524042 kubelet[2937]: I1213 14:16:38.523514 2937 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9a9b2c97-7ce9-4375-9824-9fe189b1b749-cilium-config-path\") pod \"9a9b2c97-7ce9-4375-9824-9fe189b1b749\" (UID: \"9a9b2c97-7ce9-4375-9824-9fe189b1b749\") " Dec 13 14:16:38.524042 kubelet[2937]: I1213 14:16:38.523584 2937 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9a9b2c97-7ce9-4375-9824-9fe189b1b749-lib-modules\") pod \"9a9b2c97-7ce9-4375-9824-9fe189b1b749\" (UID: \"9a9b2c97-7ce9-4375-9824-9fe189b1b749\") " Dec 13 14:16:38.524042 kubelet[2937]: I1213 14:16:38.523638 2937 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtfj2\" (UniqueName: \"kubernetes.io/projected/666fdc71-29e9-47f9-860d-0a6c19f70292-kube-api-access-mtfj2\") pod \"666fdc71-29e9-47f9-860d-0a6c19f70292\" (UID: \"666fdc71-29e9-47f9-860d-0a6c19f70292\") " Dec 13 14:16:38.524583 kubelet[2937]: I1213 14:16:38.524512 2937 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/9a9b2c97-7ce9-4375-9824-9fe189b1b749-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9a9b2c97-7ce9-4375-9824-9fe189b1b749" (UID: "9a9b2c97-7ce9-4375-9824-9fe189b1b749"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:16:38.529260 kubelet[2937]: I1213 14:16:38.526783 2937 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a9b2c97-7ce9-4375-9824-9fe189b1b749-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9a9b2c97-7ce9-4375-9824-9fe189b1b749" (UID: "9a9b2c97-7ce9-4375-9824-9fe189b1b749"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:16:38.529260 kubelet[2937]: I1213 14:16:38.526828 2937 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a9b2c97-7ce9-4375-9824-9fe189b1b749-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9a9b2c97-7ce9-4375-9824-9fe189b1b749" (UID: "9a9b2c97-7ce9-4375-9824-9fe189b1b749"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:16:38.529536 kubelet[2937]: I1213 14:16:38.529434 2937 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/666fdc71-29e9-47f9-860d-0a6c19f70292-kube-api-access-mtfj2" (OuterVolumeSpecName: "kube-api-access-mtfj2") pod "666fdc71-29e9-47f9-860d-0a6c19f70292" (UID: "666fdc71-29e9-47f9-860d-0a6c19f70292"). InnerVolumeSpecName "kube-api-access-mtfj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:16:38.531464 kubelet[2937]: I1213 14:16:38.531409 2937 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/666fdc71-29e9-47f9-860d-0a6c19f70292-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "666fdc71-29e9-47f9-860d-0a6c19f70292" (UID: "666fdc71-29e9-47f9-860d-0a6c19f70292"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:16:38.537530 kubelet[2937]: I1213 14:16:38.537481 2937 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a9b2c97-7ce9-4375-9824-9fe189b1b749-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9a9b2c97-7ce9-4375-9824-9fe189b1b749" (UID: "9a9b2c97-7ce9-4375-9824-9fe189b1b749"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:16:38.584181 kubelet[2937]: I1213 14:16:38.584120 2937 scope.go:117] "RemoveContainer" containerID="98e76a2989669d39eec48a15c8705a9f3ffb7f9b739f383bfc0a0533b0a1d47f" Dec 13 14:16:38.591722 env[1825]: time="2024-12-13T14:16:38.590592141Z" level=info msg="RemoveContainer for \"98e76a2989669d39eec48a15c8705a9f3ffb7f9b739f383bfc0a0533b0a1d47f\"" Dec 13 14:16:38.597792 systemd[1]: Removed slice kubepods-besteffort-pod666fdc71_29e9_47f9_860d_0a6c19f70292.slice. 
Dec 13 14:16:38.608153 env[1825]: time="2024-12-13T14:16:38.605798374Z" level=info msg="RemoveContainer for \"98e76a2989669d39eec48a15c8705a9f3ffb7f9b739f383bfc0a0533b0a1d47f\" returns successfully" Dec 13 14:16:38.609950 kubelet[2937]: I1213 14:16:38.609046 2937 scope.go:117] "RemoveContainer" containerID="98e76a2989669d39eec48a15c8705a9f3ffb7f9b739f383bfc0a0533b0a1d47f" Dec 13 14:16:38.609950 kubelet[2937]: E1213 14:16:38.609810 2937 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"98e76a2989669d39eec48a15c8705a9f3ffb7f9b739f383bfc0a0533b0a1d47f\": not found" containerID="98e76a2989669d39eec48a15c8705a9f3ffb7f9b739f383bfc0a0533b0a1d47f" Dec 13 14:16:38.611462 env[1825]: time="2024-12-13T14:16:38.609442547Z" level=error msg="ContainerStatus for \"98e76a2989669d39eec48a15c8705a9f3ffb7f9b739f383bfc0a0533b0a1d47f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"98e76a2989669d39eec48a15c8705a9f3ffb7f9b739f383bfc0a0533b0a1d47f\": not found" Dec 13 14:16:38.611626 kubelet[2937]: I1213 14:16:38.609974 2937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"98e76a2989669d39eec48a15c8705a9f3ffb7f9b739f383bfc0a0533b0a1d47f"} err="failed to get container status \"98e76a2989669d39eec48a15c8705a9f3ffb7f9b739f383bfc0a0533b0a1d47f\": rpc error: code = NotFound desc = an error occurred when try to find container \"98e76a2989669d39eec48a15c8705a9f3ffb7f9b739f383bfc0a0533b0a1d47f\": not found" Dec 13 14:16:38.611626 kubelet[2937]: I1213 14:16:38.610003 2937 scope.go:117] "RemoveContainer" containerID="3d6ab77e03c60826643740295cb6f4e353d8f14f8ab5e0cc521b0893686cccd8" Dec 13 14:16:38.614436 env[1825]: time="2024-12-13T14:16:38.613543338Z" level=info msg="RemoveContainer for \"3d6ab77e03c60826643740295cb6f4e353d8f14f8ab5e0cc521b0893686cccd8\"" Dec 13 14:16:38.621597 env[1825]: time="2024-12-13T14:16:38.621094992Z" level=info msg="RemoveContainer for \"3d6ab77e03c60826643740295cb6f4e353d8f14f8ab5e0cc521b0893686cccd8\" returns successfully" Dec 13 14:16:38.621799 kubelet[2937]: I1213 14:16:38.621732 2937 scope.go:117] "RemoveContainer" containerID="c1a1032e48f6a934f31ed3a23e6f350c48088f2f646add135c8ae7195bdf8f8b" Dec 13 14:16:38.624015 kubelet[2937]: I1213 14:16:38.623975 2937 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9a9b2c97-7ce9-4375-9824-9fe189b1b749-bpf-maps\") pod \"9a9b2c97-7ce9-4375-9824-9fe189b1b749\" (UID: \"9a9b2c97-7ce9-4375-9824-9fe189b1b749\") " Dec 13 14:16:38.624428 kubelet[2937]: I1213 14:16:38.624377 2937 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9a9b2c97-7ce9-4375-9824-9fe189b1b749-clustermesh-secrets\") pod \"9a9b2c97-7ce9-4375-9824-9fe189b1b749\" (UID: \"9a9b2c97-7ce9-4375-9824-9fe189b1b749\") " Dec 13 14:16:38.624534 kubelet[2937]: I1213 14:16:38.624460 2937 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9a9b2c97-7ce9-4375-9824-9fe189b1b749-cilium-cgroup\") pod \"9a9b2c97-7ce9-4375-9824-9fe189b1b749\" (UID: \"9a9b2c97-7ce9-4375-9824-9fe189b1b749\") " Dec 13 14:16:38.624534 kubelet[2937]: I1213 14:16:38.624528 2937 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/9a9b2c97-7ce9-4375-9824-9fe189b1b749-etc-cni-netd\") pod \"9a9b2c97-7ce9-4375-9824-9fe189b1b749\" (UID: \"9a9b2c97-7ce9-4375-9824-9fe189b1b749\") " Dec 13 14:16:38.624764 kubelet[2937]: I1213 14:16:38.624682 2937 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lcg5q\" (UniqueName: \"kubernetes.io/projected/9a9b2c97-7ce9-4375-9824-9fe189b1b749-kube-api-access-lcg5q\") pod \"9a9b2c97-7ce9-4375-9824-9fe189b1b749\" (UID: \"9a9b2c97-7ce9-4375-9824-9fe189b1b749\") " Dec 13 14:16:38.624764 kubelet[2937]: I1213 14:16:38.624730 2937 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9a9b2c97-7ce9-4375-9824-9fe189b1b749-cni-path\") pod \"9a9b2c97-7ce9-4375-9824-9fe189b1b749\" (UID: \"9a9b2c97-7ce9-4375-9824-9fe189b1b749\") " Dec 13 14:16:38.624889 kubelet[2937]: I1213 14:16:38.624797 2937 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9a9b2c97-7ce9-4375-9824-9fe189b1b749-cilium-run\") pod \"9a9b2c97-7ce9-4375-9824-9fe189b1b749\" (UID: \"9a9b2c97-7ce9-4375-9824-9fe189b1b749\") " Dec 13 14:16:38.624889 kubelet[2937]: I1213 14:16:38.624866 2937 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9a9b2c97-7ce9-4375-9824-9fe189b1b749-hubble-tls\") pod \"9a9b2c97-7ce9-4375-9824-9fe189b1b749\" (UID: \"9a9b2c97-7ce9-4375-9824-9fe189b1b749\") " Dec 13 14:16:38.625012 kubelet[2937]: I1213 14:16:38.624911 2937 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9a9b2c97-7ce9-4375-9824-9fe189b1b749-hostproc\") pod \"9a9b2c97-7ce9-4375-9824-9fe189b1b749\" (UID: \"9a9b2c97-7ce9-4375-9824-9fe189b1b749\") " Dec 13 14:16:38.627217 kubelet[2937]: I1213 14:16:38.627038 2937 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9a9b2c97-7ce9-4375-9824-9fe189b1b749-host-proc-sys-kernel\") pod \"9a9b2c97-7ce9-4375-9824-9fe189b1b749\" (UID: \"9a9b2c97-7ce9-4375-9824-9fe189b1b749\") " Dec 13 14:16:38.627401 kubelet[2937]: I1213 14:16:38.627385 2937 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/666fdc71-29e9-47f9-860d-0a6c19f70292-cilium-config-path\") on node \"ip-172-31-24-251\" DevicePath \"\"" Dec 13 14:16:38.627538 kubelet[2937]: I1213 14:16:38.627445 2937 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9a9b2c97-7ce9-4375-9824-9fe189b1b749-host-proc-sys-net\") on node \"ip-172-31-24-251\" DevicePath \"\"" Dec 13 14:16:38.627663 kubelet[2937]: I1213 14:16:38.627585 2937 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9a9b2c97-7ce9-4375-9824-9fe189b1b749-xtables-lock\") on node \"ip-172-31-24-251\" DevicePath \"\"" Dec 13 14:16:38.627748 kubelet[2937]: I1213 14:16:38.627683 2937 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9a9b2c97-7ce9-4375-9824-9fe189b1b749-cilium-config-path\") on node \"ip-172-31-24-251\" DevicePath \"\"" Dec 13 14:16:38.627748 kubelet[2937]: I1213 14:16:38.627715 2937 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-mtfj2\" (UniqueName: 
\"kubernetes.io/projected/666fdc71-29e9-47f9-860d-0a6c19f70292-kube-api-access-mtfj2\") on node \"ip-172-31-24-251\" DevicePath \"\"" Dec 13 14:16:38.627888 kubelet[2937]: I1213 14:16:38.627742 2937 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9a9b2c97-7ce9-4375-9824-9fe189b1b749-lib-modules\") on node \"ip-172-31-24-251\" DevicePath \"\"" Dec 13 14:16:38.627950 kubelet[2937]: I1213 14:16:38.627925 2937 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a9b2c97-7ce9-4375-9824-9fe189b1b749-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9a9b2c97-7ce9-4375-9824-9fe189b1b749" (UID: "9a9b2c97-7ce9-4375-9824-9fe189b1b749"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:16:38.628037 kubelet[2937]: I1213 14:16:38.624183 2937 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a9b2c97-7ce9-4375-9824-9fe189b1b749-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9a9b2c97-7ce9-4375-9824-9fe189b1b749" (UID: "9a9b2c97-7ce9-4375-9824-9fe189b1b749"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:16:38.629630 kubelet[2937]: I1213 14:16:38.629536 2937 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a9b2c97-7ce9-4375-9824-9fe189b1b749-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9a9b2c97-7ce9-4375-9824-9fe189b1b749" (UID: "9a9b2c97-7ce9-4375-9824-9fe189b1b749"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:16:38.631241 env[1825]: time="2024-12-13T14:16:38.631143811Z" level=info msg="RemoveContainer for \"c1a1032e48f6a934f31ed3a23e6f350c48088f2f646add135c8ae7195bdf8f8b\"" Dec 13 14:16:38.634655 kubelet[2937]: I1213 14:16:38.629854 2937 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a9b2c97-7ce9-4375-9824-9fe189b1b749-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9a9b2c97-7ce9-4375-9824-9fe189b1b749" (UID: "9a9b2c97-7ce9-4375-9824-9fe189b1b749"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:16:38.634941 kubelet[2937]: I1213 14:16:38.634859 2937 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a9b2c97-7ce9-4375-9824-9fe189b1b749-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9a9b2c97-7ce9-4375-9824-9fe189b1b749" (UID: "9a9b2c97-7ce9-4375-9824-9fe189b1b749"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:16:38.635049 kubelet[2937]: I1213 14:16:38.634960 2937 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a9b2c97-7ce9-4375-9824-9fe189b1b749-cni-path" (OuterVolumeSpecName: "cni-path") pod "9a9b2c97-7ce9-4375-9824-9fe189b1b749" (UID: "9a9b2c97-7ce9-4375-9824-9fe189b1b749"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:16:38.638416 env[1825]: time="2024-12-13T14:16:38.638314053Z" level=info msg="RemoveContainer for \"c1a1032e48f6a934f31ed3a23e6f350c48088f2f646add135c8ae7195bdf8f8b\" returns successfully" Dec 13 14:16:38.638951 kubelet[2937]: I1213 14:16:38.638850 2937 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a9b2c97-7ce9-4375-9824-9fe189b1b749-hostproc" (OuterVolumeSpecName: "hostproc") pod "9a9b2c97-7ce9-4375-9824-9fe189b1b749" (UID: "9a9b2c97-7ce9-4375-9824-9fe189b1b749"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:16:38.639352 kubelet[2937]: I1213 14:16:38.639271 2937 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a9b2c97-7ce9-4375-9824-9fe189b1b749-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9a9b2c97-7ce9-4375-9824-9fe189b1b749" (UID: "9a9b2c97-7ce9-4375-9824-9fe189b1b749"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:16:38.639506 kubelet[2937]: I1213 14:16:38.639451 2937 scope.go:117] "RemoveContainer" containerID="f6af755de24147ae175cfa9989ba861dd1a3214e90400efdd1a73a75f114e750" Dec 13 14:16:38.642172 env[1825]: time="2024-12-13T14:16:38.642103405Z" level=info msg="RemoveContainer for \"f6af755de24147ae175cfa9989ba861dd1a3214e90400efdd1a73a75f114e750\"" Dec 13 14:16:38.642847 kubelet[2937]: I1213 14:16:38.642793 2937 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a9b2c97-7ce9-4375-9824-9fe189b1b749-kube-api-access-lcg5q" (OuterVolumeSpecName: "kube-api-access-lcg5q") pod "9a9b2c97-7ce9-4375-9824-9fe189b1b749" (UID: "9a9b2c97-7ce9-4375-9824-9fe189b1b749"). InnerVolumeSpecName "kube-api-access-lcg5q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:16:38.648263 kubelet[2937]: I1213 14:16:38.648138 2937 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a9b2c97-7ce9-4375-9824-9fe189b1b749-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9a9b2c97-7ce9-4375-9824-9fe189b1b749" (UID: "9a9b2c97-7ce9-4375-9824-9fe189b1b749"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:16:38.652923 env[1825]: time="2024-12-13T14:16:38.652857472Z" level=info msg="RemoveContainer for \"f6af755de24147ae175cfa9989ba861dd1a3214e90400efdd1a73a75f114e750\" returns successfully" Dec 13 14:16:38.653318 kubelet[2937]: I1213 14:16:38.653274 2937 scope.go:117] "RemoveContainer" containerID="cc526783b2b73c50cac158072786d26f85a34746d00b00c14a4b906f188966d7" Dec 13 14:16:38.655736 env[1825]: time="2024-12-13T14:16:38.655677837Z" level=info msg="RemoveContainer for \"cc526783b2b73c50cac158072786d26f85a34746d00b00c14a4b906f188966d7\"" Dec 13 14:16:38.664639 env[1825]: time="2024-12-13T14:16:38.664521064Z" level=info msg="RemoveContainer for \"cc526783b2b73c50cac158072786d26f85a34746d00b00c14a4b906f188966d7\" returns successfully" Dec 13 14:16:38.665000 kubelet[2937]: I1213 14:16:38.664956 2937 scope.go:117] "RemoveContainer" containerID="564df5d5a05f0744a07dfa282ffbb67745597eb950712b11df41526a3ca13588" Dec 13 14:16:38.667835 env[1825]: time="2024-12-13T14:16:38.667535195Z" level=info msg="RemoveContainer for \"564df5d5a05f0744a07dfa282ffbb67745597eb950712b11df41526a3ca13588\"" Dec 13 14:16:38.673732 env[1825]: time="2024-12-13T14:16:38.673657910Z" level=info msg="RemoveContainer for \"564df5d5a05f0744a07dfa282ffbb67745597eb950712b11df41526a3ca13588\" returns successfully" Dec 13 14:16:38.674130 kubelet[2937]: I1213 14:16:38.674077 2937 scope.go:117] "RemoveContainer" containerID="3d6ab77e03c60826643740295cb6f4e353d8f14f8ab5e0cc521b0893686cccd8" Dec 13 14:16:38.674650 env[1825]: time="2024-12-13T14:16:38.674528207Z" level=error msg="ContainerStatus for \"3d6ab77e03c60826643740295cb6f4e353d8f14f8ab5e0cc521b0893686cccd8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3d6ab77e03c60826643740295cb6f4e353d8f14f8ab5e0cc521b0893686cccd8\": not found" Dec 13 14:16:38.675078 kubelet[2937]: E1213 14:16:38.675020 2937 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3d6ab77e03c60826643740295cb6f4e353d8f14f8ab5e0cc521b0893686cccd8\": not found" containerID="3d6ab77e03c60826643740295cb6f4e353d8f14f8ab5e0cc521b0893686cccd8" Dec 13 14:16:38.675172 kubelet[2937]: I1213 14:16:38.675112 2937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3d6ab77e03c60826643740295cb6f4e353d8f14f8ab5e0cc521b0893686cccd8"} err="failed to get container status \"3d6ab77e03c60826643740295cb6f4e353d8f14f8ab5e0cc521b0893686cccd8\": rpc error: code = NotFound desc = an error occurred when try to find container \"3d6ab77e03c60826643740295cb6f4e353d8f14f8ab5e0cc521b0893686cccd8\": not found" Dec 13 14:16:38.675172 kubelet[2937]: I1213 14:16:38.675159 2937 scope.go:117] "RemoveContainer" containerID="c1a1032e48f6a934f31ed3a23e6f350c48088f2f646add135c8ae7195bdf8f8b" Dec 13 14:16:38.675679 env[1825]: time="2024-12-13T14:16:38.675592234Z" level=error msg="ContainerStatus for \"c1a1032e48f6a934f31ed3a23e6f350c48088f2f646add135c8ae7195bdf8f8b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c1a1032e48f6a934f31ed3a23e6f350c48088f2f646add135c8ae7195bdf8f8b\": not found" Dec 13 14:16:38.676178 kubelet[2937]: E1213 14:16:38.675988 2937 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"c1a1032e48f6a934f31ed3a23e6f350c48088f2f646add135c8ae7195bdf8f8b\": not found" containerID="c1a1032e48f6a934f31ed3a23e6f350c48088f2f646add135c8ae7195bdf8f8b" Dec 13 14:16:38.676178 kubelet[2937]: I1213 14:16:38.676047 2937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c1a1032e48f6a934f31ed3a23e6f350c48088f2f646add135c8ae7195bdf8f8b"} err="failed to get container status \"c1a1032e48f6a934f31ed3a23e6f350c48088f2f646add135c8ae7195bdf8f8b\": rpc error: code = NotFound desc = an error occurred when try to find container \"c1a1032e48f6a934f31ed3a23e6f350c48088f2f646add135c8ae7195bdf8f8b\": not found" Dec 13 14:16:38.676178 kubelet[2937]: I1213 14:16:38.676069 2937 scope.go:117] "RemoveContainer" containerID="f6af755de24147ae175cfa9989ba861dd1a3214e90400efdd1a73a75f114e750" Dec 13 14:16:38.676686 env[1825]: time="2024-12-13T14:16:38.676602921Z" level=error msg="ContainerStatus for \"f6af755de24147ae175cfa9989ba861dd1a3214e90400efdd1a73a75f114e750\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f6af755de24147ae175cfa9989ba861dd1a3214e90400efdd1a73a75f114e750\": not found" Dec 13 14:16:38.677078 kubelet[2937]: E1213 14:16:38.677041 2937 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f6af755de24147ae175cfa9989ba861dd1a3214e90400efdd1a73a75f114e750\": not found" containerID="f6af755de24147ae175cfa9989ba861dd1a3214e90400efdd1a73a75f114e750" Dec 13 14:16:38.677176 kubelet[2937]: I1213 14:16:38.677122 2937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f6af755de24147ae175cfa9989ba861dd1a3214e90400efdd1a73a75f114e750"} err="failed to get container status \"f6af755de24147ae175cfa9989ba861dd1a3214e90400efdd1a73a75f114e750\": rpc error: code = NotFound desc = an error occurred when try to find container \"f6af755de24147ae175cfa9989ba861dd1a3214e90400efdd1a73a75f114e750\": not found" Dec 13 14:16:38.677248 kubelet[2937]: I1213 14:16:38.677171 2937 scope.go:117] "RemoveContainer" containerID="cc526783b2b73c50cac158072786d26f85a34746d00b00c14a4b906f188966d7" Dec 13 14:16:38.677682 env[1825]: time="2024-12-13T14:16:38.677601415Z" level=error msg="ContainerStatus for \"cc526783b2b73c50cac158072786d26f85a34746d00b00c14a4b906f188966d7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cc526783b2b73c50cac158072786d26f85a34746d00b00c14a4b906f188966d7\": not found" Dec 13 14:16:38.678348 kubelet[2937]: E1213 14:16:38.678117 2937 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cc526783b2b73c50cac158072786d26f85a34746d00b00c14a4b906f188966d7\": not found" containerID="cc526783b2b73c50cac158072786d26f85a34746d00b00c14a4b906f188966d7" Dec 13 14:16:38.678348 kubelet[2937]: I1213 14:16:38.678174 2937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cc526783b2b73c50cac158072786d26f85a34746d00b00c14a4b906f188966d7"} err="failed to get container status \"cc526783b2b73c50cac158072786d26f85a34746d00b00c14a4b906f188966d7\": rpc error: code = NotFound desc = an error occurred when try to find container \"cc526783b2b73c50cac158072786d26f85a34746d00b00c14a4b906f188966d7\": not found" Dec 13 14:16:38.678348 kubelet[2937]: I1213 14:16:38.678217 2937 scope.go:117] 
"RemoveContainer" containerID="564df5d5a05f0744a07dfa282ffbb67745597eb950712b11df41526a3ca13588" Dec 13 14:16:38.679126 env[1825]: time="2024-12-13T14:16:38.679024282Z" level=error msg="ContainerStatus for \"564df5d5a05f0744a07dfa282ffbb67745597eb950712b11df41526a3ca13588\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"564df5d5a05f0744a07dfa282ffbb67745597eb950712b11df41526a3ca13588\": not found" Dec 13 14:16:38.679360 kubelet[2937]: E1213 14:16:38.679327 2937 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"564df5d5a05f0744a07dfa282ffbb67745597eb950712b11df41526a3ca13588\": not found" containerID="564df5d5a05f0744a07dfa282ffbb67745597eb950712b11df41526a3ca13588" Dec 13 14:16:38.679446 kubelet[2937]: I1213 14:16:38.679409 2937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"564df5d5a05f0744a07dfa282ffbb67745597eb950712b11df41526a3ca13588"} err="failed to get container status \"564df5d5a05f0744a07dfa282ffbb67745597eb950712b11df41526a3ca13588\": rpc error: code = NotFound desc = an error occurred when try to find container \"564df5d5a05f0744a07dfa282ffbb67745597eb950712b11df41526a3ca13588\": not found" Dec 13 14:16:38.728856 kubelet[2937]: I1213 14:16:38.728798 2937 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9a9b2c97-7ce9-4375-9824-9fe189b1b749-hostproc\") on node \"ip-172-31-24-251\" DevicePath \"\"" Dec 13 14:16:38.728996 kubelet[2937]: I1213 14:16:38.728872 2937 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9a9b2c97-7ce9-4375-9824-9fe189b1b749-host-proc-sys-kernel\") on node \"ip-172-31-24-251\" DevicePath \"\"" Dec 13 14:16:38.728996 kubelet[2937]: I1213 14:16:38.728900 2937 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9a9b2c97-7ce9-4375-9824-9fe189b1b749-bpf-maps\") on node \"ip-172-31-24-251\" DevicePath \"\"" Dec 13 14:16:38.728996 kubelet[2937]: I1213 14:16:38.728953 2937 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9a9b2c97-7ce9-4375-9824-9fe189b1b749-clustermesh-secrets\") on node \"ip-172-31-24-251\" DevicePath \"\"" Dec 13 14:16:38.728996 kubelet[2937]: I1213 14:16:38.728981 2937 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9a9b2c97-7ce9-4375-9824-9fe189b1b749-cilium-cgroup\") on node \"ip-172-31-24-251\" DevicePath \"\"" Dec 13 14:16:38.729333 kubelet[2937]: I1213 14:16:38.729029 2937 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9a9b2c97-7ce9-4375-9824-9fe189b1b749-etc-cni-netd\") on node \"ip-172-31-24-251\" DevicePath \"\"" Dec 13 14:16:38.729333 kubelet[2937]: I1213 14:16:38.729059 2937 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-lcg5q\" (UniqueName: \"kubernetes.io/projected/9a9b2c97-7ce9-4375-9824-9fe189b1b749-kube-api-access-lcg5q\") on node \"ip-172-31-24-251\" DevicePath \"\"" Dec 13 14:16:38.729333 kubelet[2937]: I1213 14:16:38.729082 2937 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9a9b2c97-7ce9-4375-9824-9fe189b1b749-cni-path\") on node \"ip-172-31-24-251\" DevicePath \"\"" Dec 13 
14:16:38.729333 kubelet[2937]: I1213 14:16:38.729131 2937 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9a9b2c97-7ce9-4375-9824-9fe189b1b749-cilium-run\") on node \"ip-172-31-24-251\" DevicePath \"\"" Dec 13 14:16:38.729333 kubelet[2937]: I1213 14:16:38.729158 2937 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9a9b2c97-7ce9-4375-9824-9fe189b1b749-hubble-tls\") on node \"ip-172-31-24-251\" DevicePath \"\"" Dec 13 14:16:38.910898 systemd[1]: Removed slice kubepods-burstable-pod9a9b2c97_7ce9_4375_9824_9fe189b1b749.slice. Dec 13 14:16:38.911119 systemd[1]: kubepods-burstable-pod9a9b2c97_7ce9_4375_9824_9fe189b1b749.slice: Consumed 14.246s CPU time. Dec 13 14:16:39.088397 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3d6ab77e03c60826643740295cb6f4e353d8f14f8ab5e0cc521b0893686cccd8-rootfs.mount: Deactivated successfully. Dec 13 14:16:39.088587 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-13372ac4053010692d47646283b4c8d700099e4b72940a7719f1e5ac20a855a7-rootfs.mount: Deactivated successfully. Dec 13 14:16:39.088723 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-13372ac4053010692d47646283b4c8d700099e4b72940a7719f1e5ac20a855a7-shm.mount: Deactivated successfully. Dec 13 14:16:39.088866 systemd[1]: var-lib-kubelet-pods-9a9b2c97\x2d7ce9\x2d4375\x2d9824\x2d9fe189b1b749-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 14:16:39.089023 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9151158ca0f36cd19229f0cbf113091bc074b944572eceddf2cb2a2fc12ce26c-rootfs.mount: Deactivated successfully. Dec 13 14:16:39.089163 systemd[1]: var-lib-kubelet-pods-9a9b2c97\x2d7ce9\x2d4375\x2d9824\x2d9fe189b1b749-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlcg5q.mount: Deactivated successfully. Dec 13 14:16:39.089309 systemd[1]: var-lib-kubelet-pods-9a9b2c97\x2d7ce9\x2d4375\x2d9824\x2d9fe189b1b749-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 14:16:39.089454 systemd[1]: var-lib-kubelet-pods-666fdc71\x2d29e9\x2d47f9\x2d860d\x2d0a6c19f70292-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmtfj2.mount: Deactivated successfully. Dec 13 14:16:40.025224 sshd[4512]: pam_unix(sshd:session): session closed for user core Dec 13 14:16:40.029519 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 14:16:40.029861 systemd[1]: session-25.scope: Consumed 1.579s CPU time. Dec 13 14:16:40.031062 systemd-logind[1810]: Session 25 logged out. Waiting for processes to exit. Dec 13 14:16:40.031317 systemd[1]: sshd@24-172.31.24.251:22-139.178.89.65:39424.service: Deactivated successfully. Dec 13 14:16:40.033843 systemd-logind[1810]: Removed session 25. Dec 13 14:16:40.054133 systemd[1]: Started sshd@25-172.31.24.251:22-139.178.89.65:55352.service. 
Dec 13 14:16:40.200429 kubelet[2937]: I1213 14:16:40.200394 2937 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="666fdc71-29e9-47f9-860d-0a6c19f70292" path="/var/lib/kubelet/pods/666fdc71-29e9-47f9-860d-0a6c19f70292/volumes" Dec 13 14:16:40.202168 kubelet[2937]: I1213 14:16:40.202138 2937 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="9a9b2c97-7ce9-4375-9824-9fe189b1b749" path="/var/lib/kubelet/pods/9a9b2c97-7ce9-4375-9824-9fe189b1b749/volumes" Dec 13 14:16:40.230622 sshd[4676]: Accepted publickey for core from 139.178.89.65 port 55352 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:16:40.232583 sshd[4676]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:16:40.240652 systemd-logind[1810]: New session 26 of user core. Dec 13 14:16:40.241759 systemd[1]: Started session-26.scope. Dec 13 14:16:41.405279 kubelet[2937]: E1213 14:16:41.405244 2937 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 14:16:41.586212 sshd[4676]: pam_unix(sshd:session): session closed for user core Dec 13 14:16:41.591795 systemd[1]: sshd@25-172.31.24.251:22-139.178.89.65:55352.service: Deactivated successfully. Dec 13 14:16:41.593216 systemd[1]: session-26.scope: Deactivated successfully. Dec 13 14:16:41.593546 systemd[1]: session-26.scope: Consumed 1.116s CPU time. Dec 13 14:16:41.594971 systemd-logind[1810]: Session 26 logged out. Waiting for processes to exit. Dec 13 14:16:41.597154 systemd-logind[1810]: Removed session 26. Dec 13 14:16:41.599269 kubelet[2937]: I1213 14:16:41.599223 2937 topology_manager.go:215] "Topology Admit Handler" podUID="2c7588d0-39bd-4534-8a9d-307ff118b3e3" podNamespace="kube-system" podName="cilium-l2tsb" Dec 13 14:16:41.599535 kubelet[2937]: E1213 14:16:41.599509 2937 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="666fdc71-29e9-47f9-860d-0a6c19f70292" containerName="cilium-operator" Dec 13 14:16:41.599705 kubelet[2937]: E1213 14:16:41.599683 2937 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9a9b2c97-7ce9-4375-9824-9fe189b1b749" containerName="mount-cgroup" Dec 13 14:16:41.599827 kubelet[2937]: E1213 14:16:41.599806 2937 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9a9b2c97-7ce9-4375-9824-9fe189b1b749" containerName="apply-sysctl-overwrites" Dec 13 14:16:41.599944 kubelet[2937]: E1213 14:16:41.599923 2937 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9a9b2c97-7ce9-4375-9824-9fe189b1b749" containerName="mount-bpf-fs" Dec 13 14:16:41.600074 kubelet[2937]: E1213 14:16:41.600053 2937 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9a9b2c97-7ce9-4375-9824-9fe189b1b749" containerName="clean-cilium-state" Dec 13 14:16:41.600205 kubelet[2937]: E1213 14:16:41.600184 2937 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9a9b2c97-7ce9-4375-9824-9fe189b1b749" containerName="cilium-agent" Dec 13 14:16:41.600370 kubelet[2937]: I1213 14:16:41.600347 2937 memory_manager.go:354] "RemoveStaleState removing state" podUID="666fdc71-29e9-47f9-860d-0a6c19f70292" containerName="cilium-operator" Dec 13 14:16:41.600504 kubelet[2937]: I1213 14:16:41.600482 2937 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a9b2c97-7ce9-4375-9824-9fe189b1b749" containerName="cilium-agent" Dec 13 14:16:41.628867 systemd[1]: Started 
sshd@26-172.31.24.251:22-139.178.89.65:55354.service. Dec 13 14:16:41.639261 systemd[1]: Created slice kubepods-burstable-pod2c7588d0_39bd_4534_8a9d_307ff118b3e3.slice. Dec 13 14:16:41.749981 kubelet[2937]: I1213 14:16:41.749738 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2c7588d0-39bd-4534-8a9d-307ff118b3e3-host-proc-sys-net\") pod \"cilium-l2tsb\" (UID: \"2c7588d0-39bd-4534-8a9d-307ff118b3e3\") " pod="kube-system/cilium-l2tsb" Dec 13 14:16:41.750150 kubelet[2937]: I1213 14:16:41.749999 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2c7588d0-39bd-4534-8a9d-307ff118b3e3-host-proc-sys-kernel\") pod \"cilium-l2tsb\" (UID: \"2c7588d0-39bd-4534-8a9d-307ff118b3e3\") " pod="kube-system/cilium-l2tsb" Dec 13 14:16:41.750249 kubelet[2937]: I1213 14:16:41.750158 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2c7588d0-39bd-4534-8a9d-307ff118b3e3-cilium-cgroup\") pod \"cilium-l2tsb\" (UID: \"2c7588d0-39bd-4534-8a9d-307ff118b3e3\") " pod="kube-system/cilium-l2tsb" Dec 13 14:16:41.750333 kubelet[2937]: I1213 14:16:41.750265 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2c7588d0-39bd-4534-8a9d-307ff118b3e3-cni-path\") pod \"cilium-l2tsb\" (UID: \"2c7588d0-39bd-4534-8a9d-307ff118b3e3\") " pod="kube-system/cilium-l2tsb" Dec 13 14:16:41.751307 kubelet[2937]: I1213 14:16:41.750402 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2c7588d0-39bd-4534-8a9d-307ff118b3e3-lib-modules\") pod \"cilium-l2tsb\" (UID: \"2c7588d0-39bd-4534-8a9d-307ff118b3e3\") " pod="kube-system/cilium-l2tsb" Dec 13 14:16:41.751307 kubelet[2937]: I1213 14:16:41.750533 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2c7588d0-39bd-4534-8a9d-307ff118b3e3-bpf-maps\") pod \"cilium-l2tsb\" (UID: \"2c7588d0-39bd-4534-8a9d-307ff118b3e3\") " pod="kube-system/cilium-l2tsb" Dec 13 14:16:41.751307 kubelet[2937]: I1213 14:16:41.750785 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2c7588d0-39bd-4534-8a9d-307ff118b3e3-cilium-run\") pod \"cilium-l2tsb\" (UID: \"2c7588d0-39bd-4534-8a9d-307ff118b3e3\") " pod="kube-system/cilium-l2tsb" Dec 13 14:16:41.751307 kubelet[2937]: I1213 14:16:41.750866 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2c7588d0-39bd-4534-8a9d-307ff118b3e3-cilium-ipsec-secrets\") pod \"cilium-l2tsb\" (UID: \"2c7588d0-39bd-4534-8a9d-307ff118b3e3\") " pod="kube-system/cilium-l2tsb" Dec 13 14:16:41.751307 kubelet[2937]: I1213 14:16:41.750986 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wm9h\" (UniqueName: \"kubernetes.io/projected/2c7588d0-39bd-4534-8a9d-307ff118b3e3-kube-api-access-8wm9h\") pod \"cilium-l2tsb\" (UID: \"2c7588d0-39bd-4534-8a9d-307ff118b3e3\") " pod="kube-system/cilium-l2tsb" Dec 13 
14:16:41.751710 kubelet[2937]: I1213 14:16:41.751579 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2c7588d0-39bd-4534-8a9d-307ff118b3e3-xtables-lock\") pod \"cilium-l2tsb\" (UID: \"2c7588d0-39bd-4534-8a9d-307ff118b3e3\") " pod="kube-system/cilium-l2tsb" Dec 13 14:16:41.751710 kubelet[2937]: I1213 14:16:41.751686 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2c7588d0-39bd-4534-8a9d-307ff118b3e3-hostproc\") pod \"cilium-l2tsb\" (UID: \"2c7588d0-39bd-4534-8a9d-307ff118b3e3\") " pod="kube-system/cilium-l2tsb" Dec 13 14:16:41.751899 kubelet[2937]: I1213 14:16:41.751862 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2c7588d0-39bd-4534-8a9d-307ff118b3e3-clustermesh-secrets\") pod \"cilium-l2tsb\" (UID: \"2c7588d0-39bd-4534-8a9d-307ff118b3e3\") " pod="kube-system/cilium-l2tsb" Dec 13 14:16:41.751973 kubelet[2937]: I1213 14:16:41.751951 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2c7588d0-39bd-4534-8a9d-307ff118b3e3-hubble-tls\") pod \"cilium-l2tsb\" (UID: \"2c7588d0-39bd-4534-8a9d-307ff118b3e3\") " pod="kube-system/cilium-l2tsb" Dec 13 14:16:41.752140 kubelet[2937]: I1213 14:16:41.752108 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2c7588d0-39bd-4534-8a9d-307ff118b3e3-etc-cni-netd\") pod \"cilium-l2tsb\" (UID: \"2c7588d0-39bd-4534-8a9d-307ff118b3e3\") " pod="kube-system/cilium-l2tsb" Dec 13 14:16:41.752267 kubelet[2937]: I1213 14:16:41.752235 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2c7588d0-39bd-4534-8a9d-307ff118b3e3-cilium-config-path\") pod \"cilium-l2tsb\" (UID: \"2c7588d0-39bd-4534-8a9d-307ff118b3e3\") " pod="kube-system/cilium-l2tsb" Dec 13 14:16:41.813134 sshd[4686]: Accepted publickey for core from 139.178.89.65 port 55354 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:16:41.815730 sshd[4686]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:16:41.829046 systemd[1]: Started session-27.scope. Dec 13 14:16:41.830373 systemd-logind[1810]: New session 27 of user core. Dec 13 14:16:41.955193 env[1825]: time="2024-12-13T14:16:41.954656530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-l2tsb,Uid:2c7588d0-39bd-4534-8a9d-307ff118b3e3,Namespace:kube-system,Attempt:0,}" Dec 13 14:16:42.005743 env[1825]: time="2024-12-13T14:16:42.005420776Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:16:42.005743 env[1825]: time="2024-12-13T14:16:42.005498068Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:16:42.005743 env[1825]: time="2024-12-13T14:16:42.005524517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:16:42.007178 env[1825]: time="2024-12-13T14:16:42.007028840Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/69c6fab03af70cea63d0b237d3b22cfd2970dfe4d4f915b7f6843725e063ee8a pid=4707 runtime=io.containerd.runc.v2 Dec 13 14:16:42.037646 systemd[1]: Started cri-containerd-69c6fab03af70cea63d0b237d3b22cfd2970dfe4d4f915b7f6843725e063ee8a.scope. Dec 13 14:16:42.118328 env[1825]: time="2024-12-13T14:16:42.118272225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-l2tsb,Uid:2c7588d0-39bd-4534-8a9d-307ff118b3e3,Namespace:kube-system,Attempt:0,} returns sandbox id \"69c6fab03af70cea63d0b237d3b22cfd2970dfe4d4f915b7f6843725e063ee8a\"" Dec 13 14:16:42.129390 env[1825]: time="2024-12-13T14:16:42.129332113Z" level=info msg="CreateContainer within sandbox \"69c6fab03af70cea63d0b237d3b22cfd2970dfe4d4f915b7f6843725e063ee8a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:16:42.162476 env[1825]: time="2024-12-13T14:16:42.162364928Z" level=info msg="CreateContainer within sandbox \"69c6fab03af70cea63d0b237d3b22cfd2970dfe4d4f915b7f6843725e063ee8a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7ab7675ed28672fb201373fb121d741aa2efead6daf1ed6350c75539dbedf492\"" Dec 13 14:16:42.163763 env[1825]: time="2024-12-13T14:16:42.163684270Z" level=info msg="StartContainer for \"7ab7675ed28672fb201373fb121d741aa2efead6daf1ed6350c75539dbedf492\"" Dec 13 14:16:42.178826 sshd[4686]: pam_unix(sshd:session): session closed for user core Dec 13 14:16:42.186313 systemd[1]: session-27.scope: Deactivated successfully. Dec 13 14:16:42.187583 systemd[1]: sshd@26-172.31.24.251:22-139.178.89.65:55354.service: Deactivated successfully. Dec 13 14:16:42.190356 systemd-logind[1810]: Session 27 logged out. Waiting for processes to exit. Dec 13 14:16:42.197702 systemd-logind[1810]: Removed session 27. Dec 13 14:16:42.206476 systemd[1]: Started sshd@27-172.31.24.251:22-139.178.89.65:55368.service. Dec 13 14:16:42.239730 systemd[1]: Started cri-containerd-7ab7675ed28672fb201373fb121d741aa2efead6daf1ed6350c75539dbedf492.scope. Dec 13 14:16:42.272924 systemd[1]: cri-containerd-7ab7675ed28672fb201373fb121d741aa2efead6daf1ed6350c75539dbedf492.scope: Deactivated successfully. 
Dec 13 14:16:42.299765 env[1825]: time="2024-12-13T14:16:42.299698592Z" level=info msg="shim disconnected" id=7ab7675ed28672fb201373fb121d741aa2efead6daf1ed6350c75539dbedf492 Dec 13 14:16:42.300204 env[1825]: time="2024-12-13T14:16:42.300168397Z" level=warning msg="cleaning up after shim disconnected" id=7ab7675ed28672fb201373fb121d741aa2efead6daf1ed6350c75539dbedf492 namespace=k8s.io Dec 13 14:16:42.300331 env[1825]: time="2024-12-13T14:16:42.300303986Z" level=info msg="cleaning up dead shim" Dec 13 14:16:42.314918 env[1825]: time="2024-12-13T14:16:42.314854805Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:16:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4769 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T14:16:42Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/7ab7675ed28672fb201373fb121d741aa2efead6daf1ed6350c75539dbedf492/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Dec 13 14:16:42.315745 env[1825]: time="2024-12-13T14:16:42.315541440Z" level=error msg="copy shim log" error="read /proc/self/fd/41: file already closed" Dec 13 14:16:42.316170 env[1825]: time="2024-12-13T14:16:42.316090361Z" level=error msg="Failed to pipe stdout of container \"7ab7675ed28672fb201373fb121d741aa2efead6daf1ed6350c75539dbedf492\"" error="reading from a closed fifo" Dec 13 14:16:42.316386 env[1825]: time="2024-12-13T14:16:42.316341896Z" level=error msg="Failed to pipe stderr of container \"7ab7675ed28672fb201373fb121d741aa2efead6daf1ed6350c75539dbedf492\"" error="reading from a closed fifo" Dec 13 14:16:42.320023 env[1825]: time="2024-12-13T14:16:42.319915328Z" level=error msg="StartContainer for \"7ab7675ed28672fb201373fb121d741aa2efead6daf1ed6350c75539dbedf492\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Dec 13 14:16:42.320646 kubelet[2937]: E1213 14:16:42.320378 2937 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="7ab7675ed28672fb201373fb121d741aa2efead6daf1ed6350c75539dbedf492" Dec 13 14:16:42.320646 kubelet[2937]: E1213 14:16:42.320536 2937 kuberuntime_manager.go:1262] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Dec 13 14:16:42.320646 kubelet[2937]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Dec 13 14:16:42.320646 kubelet[2937]: rm /hostbin/cilium-mount Dec 13 14:16:42.320980 kubelet[2937]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-8wm9h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-l2tsb_kube-system(2c7588d0-39bd-4534-8a9d-307ff118b3e3): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Dec 13 14:16:42.321319 kubelet[2937]: E1213 14:16:42.321219 2937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-l2tsb" podUID="2c7588d0-39bd-4534-8a9d-307ff118b3e3" Dec 13 14:16:42.399116 sshd[4757]: Accepted publickey for core from 139.178.89.65 port 55368 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:16:42.403032 sshd[4757]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:16:42.411119 systemd-logind[1810]: New session 28 of user core. Dec 13 14:16:42.411980 systemd[1]: Started session-28.scope. Dec 13 14:16:42.617717 env[1825]: time="2024-12-13T14:16:42.616693862Z" level=info msg="StopPodSandbox for \"69c6fab03af70cea63d0b237d3b22cfd2970dfe4d4f915b7f6843725e063ee8a\"" Dec 13 14:16:42.617717 env[1825]: time="2024-12-13T14:16:42.616795179Z" level=info msg="Container to stop \"7ab7675ed28672fb201373fb121d741aa2efead6daf1ed6350c75539dbedf492\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:16:42.636767 systemd[1]: cri-containerd-69c6fab03af70cea63d0b237d3b22cfd2970dfe4d4f915b7f6843725e063ee8a.scope: Deactivated successfully. 
Dec 13 14:16:42.714509 env[1825]: time="2024-12-13T14:16:42.714421872Z" level=info msg="shim disconnected" id=69c6fab03af70cea63d0b237d3b22cfd2970dfe4d4f915b7f6843725e063ee8a Dec 13 14:16:42.714509 env[1825]: time="2024-12-13T14:16:42.714503640Z" level=warning msg="cleaning up after shim disconnected" id=69c6fab03af70cea63d0b237d3b22cfd2970dfe4d4f915b7f6843725e063ee8a namespace=k8s.io Dec 13 14:16:42.714909 env[1825]: time="2024-12-13T14:16:42.714526993Z" level=info msg="cleaning up dead shim" Dec 13 14:16:42.731096 env[1825]: time="2024-12-13T14:16:42.731018614Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:16:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4809 runtime=io.containerd.runc.v2\n" Dec 13 14:16:42.731698 env[1825]: time="2024-12-13T14:16:42.731650565Z" level=info msg="TearDown network for sandbox \"69c6fab03af70cea63d0b237d3b22cfd2970dfe4d4f915b7f6843725e063ee8a\" successfully" Dec 13 14:16:42.731811 env[1825]: time="2024-12-13T14:16:42.731699741Z" level=info msg="StopPodSandbox for \"69c6fab03af70cea63d0b237d3b22cfd2970dfe4d4f915b7f6843725e063ee8a\" returns successfully" Dec 13 14:16:42.770293 kubelet[2937]: I1213 14:16:42.770244 2937 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2c7588d0-39bd-4534-8a9d-307ff118b3e3-cni-path\") pod \"2c7588d0-39bd-4534-8a9d-307ff118b3e3\" (UID: \"2c7588d0-39bd-4534-8a9d-307ff118b3e3\") " Dec 13 14:16:42.771111 kubelet[2937]: I1213 14:16:42.771075 2937 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2c7588d0-39bd-4534-8a9d-307ff118b3e3-bpf-maps\") pod \"2c7588d0-39bd-4534-8a9d-307ff118b3e3\" (UID: \"2c7588d0-39bd-4534-8a9d-307ff118b3e3\") " Dec 13 14:16:42.771291 kubelet[2937]: I1213 14:16:42.771267 2937 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2c7588d0-39bd-4534-8a9d-307ff118b3e3-xtables-lock\") pod \"2c7588d0-39bd-4534-8a9d-307ff118b3e3\" (UID: \"2c7588d0-39bd-4534-8a9d-307ff118b3e3\") " Dec 13 14:16:42.771466 kubelet[2937]: I1213 14:16:42.771443 2937 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2c7588d0-39bd-4534-8a9d-307ff118b3e3-cilium-ipsec-secrets\") pod \"2c7588d0-39bd-4534-8a9d-307ff118b3e3\" (UID: \"2c7588d0-39bd-4534-8a9d-307ff118b3e3\") " Dec 13 14:16:42.771610 kubelet[2937]: I1213 14:16:42.770344 2937 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c7588d0-39bd-4534-8a9d-307ff118b3e3-cni-path" (OuterVolumeSpecName: "cni-path") pod "2c7588d0-39bd-4534-8a9d-307ff118b3e3" (UID: "2c7588d0-39bd-4534-8a9d-307ff118b3e3"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:16:42.771711 kubelet[2937]: I1213 14:16:42.771133 2937 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c7588d0-39bd-4534-8a9d-307ff118b3e3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2c7588d0-39bd-4534-8a9d-307ff118b3e3" (UID: "2c7588d0-39bd-4534-8a9d-307ff118b3e3"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:16:42.771839 kubelet[2937]: I1213 14:16:42.771298 2937 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c7588d0-39bd-4534-8a9d-307ff118b3e3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2c7588d0-39bd-4534-8a9d-307ff118b3e3" (UID: "2c7588d0-39bd-4534-8a9d-307ff118b3e3"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:16:42.772075 kubelet[2937]: I1213 14:16:42.771995 2937 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2c7588d0-39bd-4534-8a9d-307ff118b3e3-lib-modules\") pod \"2c7588d0-39bd-4534-8a9d-307ff118b3e3\" (UID: \"2c7588d0-39bd-4534-8a9d-307ff118b3e3\") " Dec 13 14:16:42.772199 kubelet[2937]: I1213 14:16:42.772162 2937 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2c7588d0-39bd-4534-8a9d-307ff118b3e3-cilium-config-path\") pod \"2c7588d0-39bd-4534-8a9d-307ff118b3e3\" (UID: \"2c7588d0-39bd-4534-8a9d-307ff118b3e3\") " Dec 13 14:16:42.772300 kubelet[2937]: I1213 14:16:42.772221 2937 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2c7588d0-39bd-4534-8a9d-307ff118b3e3-hostproc\") pod \"2c7588d0-39bd-4534-8a9d-307ff118b3e3\" (UID: \"2c7588d0-39bd-4534-8a9d-307ff118b3e3\") " Dec 13 14:16:42.772300 kubelet[2937]: I1213 14:16:42.772270 2937 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2c7588d0-39bd-4534-8a9d-307ff118b3e3-host-proc-sys-kernel\") pod \"2c7588d0-39bd-4534-8a9d-307ff118b3e3\" (UID: \"2c7588d0-39bd-4534-8a9d-307ff118b3e3\") " Dec 13 14:16:42.772432 kubelet[2937]: I1213 14:16:42.772309 2937 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2c7588d0-39bd-4534-8a9d-307ff118b3e3-cilium-run\") pod \"2c7588d0-39bd-4534-8a9d-307ff118b3e3\" (UID: \"2c7588d0-39bd-4534-8a9d-307ff118b3e3\") " Dec 13 14:16:42.772432 kubelet[2937]: I1213 14:16:42.772354 2937 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2c7588d0-39bd-4534-8a9d-307ff118b3e3-clustermesh-secrets\") pod \"2c7588d0-39bd-4534-8a9d-307ff118b3e3\" (UID: \"2c7588d0-39bd-4534-8a9d-307ff118b3e3\") " Dec 13 14:16:42.772432 kubelet[2937]: I1213 14:16:42.772399 2937 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2c7588d0-39bd-4534-8a9d-307ff118b3e3-hubble-tls\") pod \"2c7588d0-39bd-4534-8a9d-307ff118b3e3\" (UID: \"2c7588d0-39bd-4534-8a9d-307ff118b3e3\") " Dec 13 14:16:42.772672 kubelet[2937]: I1213 14:16:42.772439 2937 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2c7588d0-39bd-4534-8a9d-307ff118b3e3-host-proc-sys-net\") pod \"2c7588d0-39bd-4534-8a9d-307ff118b3e3\" (UID: \"2c7588d0-39bd-4534-8a9d-307ff118b3e3\") " Dec 13 14:16:42.772672 kubelet[2937]: I1213 14:16:42.772483 2937 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2c7588d0-39bd-4534-8a9d-307ff118b3e3-cilium-cgroup\") pod 
\"2c7588d0-39bd-4534-8a9d-307ff118b3e3\" (UID: \"2c7588d0-39bd-4534-8a9d-307ff118b3e3\") " Dec 13 14:16:42.772672 kubelet[2937]: I1213 14:16:42.772531 2937 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wm9h\" (UniqueName: \"kubernetes.io/projected/2c7588d0-39bd-4534-8a9d-307ff118b3e3-kube-api-access-8wm9h\") pod \"2c7588d0-39bd-4534-8a9d-307ff118b3e3\" (UID: \"2c7588d0-39bd-4534-8a9d-307ff118b3e3\") " Dec 13 14:16:42.772672 kubelet[2937]: I1213 14:16:42.772602 2937 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2c7588d0-39bd-4534-8a9d-307ff118b3e3-etc-cni-netd\") pod \"2c7588d0-39bd-4534-8a9d-307ff118b3e3\" (UID: \"2c7588d0-39bd-4534-8a9d-307ff118b3e3\") " Dec 13 14:16:42.772907 kubelet[2937]: I1213 14:16:42.772675 2937 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2c7588d0-39bd-4534-8a9d-307ff118b3e3-bpf-maps\") on node \"ip-172-31-24-251\" DevicePath \"\"" Dec 13 14:16:42.772907 kubelet[2937]: I1213 14:16:42.772705 2937 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2c7588d0-39bd-4534-8a9d-307ff118b3e3-xtables-lock\") on node \"ip-172-31-24-251\" DevicePath \"\"" Dec 13 14:16:42.772907 kubelet[2937]: I1213 14:16:42.772746 2937 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c7588d0-39bd-4534-8a9d-307ff118b3e3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2c7588d0-39bd-4534-8a9d-307ff118b3e3" (UID: "2c7588d0-39bd-4534-8a9d-307ff118b3e3"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:16:42.772907 kubelet[2937]: I1213 14:16:42.772794 2937 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c7588d0-39bd-4534-8a9d-307ff118b3e3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2c7588d0-39bd-4534-8a9d-307ff118b3e3" (UID: "2c7588d0-39bd-4534-8a9d-307ff118b3e3"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:16:42.775770 kubelet[2937]: I1213 14:16:42.775720 2937 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c7588d0-39bd-4534-8a9d-307ff118b3e3-hostproc" (OuterVolumeSpecName: "hostproc") pod "2c7588d0-39bd-4534-8a9d-307ff118b3e3" (UID: "2c7588d0-39bd-4534-8a9d-307ff118b3e3"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:16:42.777063 kubelet[2937]: I1213 14:16:42.776982 2937 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c7588d0-39bd-4534-8a9d-307ff118b3e3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2c7588d0-39bd-4534-8a9d-307ff118b3e3" (UID: "2c7588d0-39bd-4534-8a9d-307ff118b3e3"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:16:42.777309 kubelet[2937]: I1213 14:16:42.777278 2937 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c7588d0-39bd-4534-8a9d-307ff118b3e3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2c7588d0-39bd-4534-8a9d-307ff118b3e3" (UID: "2c7588d0-39bd-4534-8a9d-307ff118b3e3"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:16:42.785033 kubelet[2937]: I1213 14:16:42.783589 2937 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c7588d0-39bd-4534-8a9d-307ff118b3e3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2c7588d0-39bd-4534-8a9d-307ff118b3e3" (UID: "2c7588d0-39bd-4534-8a9d-307ff118b3e3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:16:42.786815 kubelet[2937]: I1213 14:16:42.783851 2937 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c7588d0-39bd-4534-8a9d-307ff118b3e3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2c7588d0-39bd-4534-8a9d-307ff118b3e3" (UID: "2c7588d0-39bd-4534-8a9d-307ff118b3e3"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:16:42.787054 kubelet[2937]: I1213 14:16:42.783900 2937 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c7588d0-39bd-4534-8a9d-307ff118b3e3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2c7588d0-39bd-4534-8a9d-307ff118b3e3" (UID: "2c7588d0-39bd-4534-8a9d-307ff118b3e3"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:16:42.790013 kubelet[2937]: I1213 14:16:42.789883 2937 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c7588d0-39bd-4534-8a9d-307ff118b3e3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2c7588d0-39bd-4534-8a9d-307ff118b3e3" (UID: "2c7588d0-39bd-4534-8a9d-307ff118b3e3"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:16:42.790579 kubelet[2937]: I1213 14:16:42.790502 2937 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c7588d0-39bd-4534-8a9d-307ff118b3e3-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "2c7588d0-39bd-4534-8a9d-307ff118b3e3" (UID: "2c7588d0-39bd-4534-8a9d-307ff118b3e3"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:16:42.795194 kubelet[2937]: I1213 14:16:42.795126 2937 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c7588d0-39bd-4534-8a9d-307ff118b3e3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2c7588d0-39bd-4534-8a9d-307ff118b3e3" (UID: "2c7588d0-39bd-4534-8a9d-307ff118b3e3"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:16:42.795531 kubelet[2937]: I1213 14:16:42.795496 2937 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c7588d0-39bd-4534-8a9d-307ff118b3e3-kube-api-access-8wm9h" (OuterVolumeSpecName: "kube-api-access-8wm9h") pod "2c7588d0-39bd-4534-8a9d-307ff118b3e3" (UID: "2c7588d0-39bd-4534-8a9d-307ff118b3e3"). InnerVolumeSpecName "kube-api-access-8wm9h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:16:42.873615 kubelet[2937]: I1213 14:16:42.872969 2937 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2c7588d0-39bd-4534-8a9d-307ff118b3e3-hostproc\") on node \"ip-172-31-24-251\" DevicePath \"\"" Dec 13 14:16:42.873615 kubelet[2937]: I1213 14:16:42.873032 2937 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2c7588d0-39bd-4534-8a9d-307ff118b3e3-cilium-run\") on node \"ip-172-31-24-251\" DevicePath \"\"" Dec 13 14:16:42.873615 kubelet[2937]: I1213 14:16:42.873062 2937 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2c7588d0-39bd-4534-8a9d-307ff118b3e3-host-proc-sys-kernel\") on node \"ip-172-31-24-251\" DevicePath \"\"" Dec 13 14:16:42.873615 kubelet[2937]: I1213 14:16:42.873091 2937 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2c7588d0-39bd-4534-8a9d-307ff118b3e3-host-proc-sys-net\") on node \"ip-172-31-24-251\" DevicePath \"\"" Dec 13 14:16:42.873615 kubelet[2937]: I1213 14:16:42.873116 2937 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2c7588d0-39bd-4534-8a9d-307ff118b3e3-cilium-cgroup\") on node \"ip-172-31-24-251\" DevicePath \"\"" Dec 13 14:16:42.873615 kubelet[2937]: I1213 14:16:42.873143 2937 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-8wm9h\" (UniqueName: \"kubernetes.io/projected/2c7588d0-39bd-4534-8a9d-307ff118b3e3-kube-api-access-8wm9h\") on node \"ip-172-31-24-251\" DevicePath \"\"" Dec 13 14:16:42.873615 kubelet[2937]: I1213 14:16:42.873169 2937 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2c7588d0-39bd-4534-8a9d-307ff118b3e3-clustermesh-secrets\") on node \"ip-172-31-24-251\" DevicePath \"\"" Dec 13 14:16:42.873615 kubelet[2937]: I1213 14:16:42.873195 2937 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2c7588d0-39bd-4534-8a9d-307ff118b3e3-hubble-tls\") on node \"ip-172-31-24-251\" DevicePath \"\"" Dec 13 14:16:42.874197 kubelet[2937]: I1213 14:16:42.873219 2937 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2c7588d0-39bd-4534-8a9d-307ff118b3e3-etc-cni-netd\") on node \"ip-172-31-24-251\" DevicePath \"\"" Dec 13 14:16:42.874197 kubelet[2937]: I1213 14:16:42.873244 2937 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2c7588d0-39bd-4534-8a9d-307ff118b3e3-cni-path\") on node \"ip-172-31-24-251\" DevicePath \"\"" Dec 13 14:16:42.874197 kubelet[2937]: I1213 14:16:42.873268 2937 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2c7588d0-39bd-4534-8a9d-307ff118b3e3-cilium-ipsec-secrets\") on node \"ip-172-31-24-251\" DevicePath \"\"" Dec 13 14:16:42.874197 kubelet[2937]: I1213 14:16:42.873294 2937 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2c7588d0-39bd-4534-8a9d-307ff118b3e3-cilium-config-path\") on node \"ip-172-31-24-251\" DevicePath \"\"" Dec 13 14:16:42.874197 kubelet[2937]: I1213 14:16:42.873319 2937 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/2c7588d0-39bd-4534-8a9d-307ff118b3e3-lib-modules\") on node \"ip-172-31-24-251\" DevicePath \"\"" Dec 13 14:16:42.874955 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-69c6fab03af70cea63d0b237d3b22cfd2970dfe4d4f915b7f6843725e063ee8a-shm.mount: Deactivated successfully. Dec 13 14:16:42.875160 systemd[1]: var-lib-kubelet-pods-2c7588d0\x2d39bd\x2d4534\x2d8a9d\x2d307ff118b3e3-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Dec 13 14:16:42.875302 systemd[1]: var-lib-kubelet-pods-2c7588d0\x2d39bd\x2d4534\x2d8a9d\x2d307ff118b3e3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8wm9h.mount: Deactivated successfully. Dec 13 14:16:42.875435 systemd[1]: var-lib-kubelet-pods-2c7588d0\x2d39bd\x2d4534\x2d8a9d\x2d307ff118b3e3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 14:16:42.875595 systemd[1]: var-lib-kubelet-pods-2c7588d0\x2d39bd\x2d4534\x2d8a9d\x2d307ff118b3e3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 14:16:43.620997 kubelet[2937]: I1213 14:16:43.620960 2937 scope.go:117] "RemoveContainer" containerID="7ab7675ed28672fb201373fb121d741aa2efead6daf1ed6350c75539dbedf492" Dec 13 14:16:43.628525 env[1825]: time="2024-12-13T14:16:43.628059940Z" level=info msg="RemoveContainer for \"7ab7675ed28672fb201373fb121d741aa2efead6daf1ed6350c75539dbedf492\"" Dec 13 14:16:43.630170 systemd[1]: Removed slice kubepods-burstable-pod2c7588d0_39bd_4534_8a9d_307ff118b3e3.slice. Dec 13 14:16:43.635891 env[1825]: time="2024-12-13T14:16:43.635811934Z" level=info msg="RemoveContainer for \"7ab7675ed28672fb201373fb121d741aa2efead6daf1ed6350c75539dbedf492\" returns successfully" Dec 13 14:16:43.691875 kubelet[2937]: I1213 14:16:43.691807 2937 topology_manager.go:215] "Topology Admit Handler" podUID="ce5e1f36-66af-40cd-864f-99dd3a3b2c6f" podNamespace="kube-system" podName="cilium-xwmxk" Dec 13 14:16:43.692176 kubelet[2937]: E1213 14:16:43.692130 2937 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2c7588d0-39bd-4534-8a9d-307ff118b3e3" containerName="mount-cgroup" Dec 13 14:16:43.692395 kubelet[2937]: I1213 14:16:43.692360 2937 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c7588d0-39bd-4534-8a9d-307ff118b3e3" containerName="mount-cgroup" Dec 13 14:16:43.705141 systemd[1]: Created slice kubepods-burstable-podce5e1f36_66af_40cd_864f_99dd3a3b2c6f.slice. 
Dec 13 14:16:43.778024 kubelet[2937]: I1213 14:16:43.777944 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbvbg\" (UniqueName: \"kubernetes.io/projected/ce5e1f36-66af-40cd-864f-99dd3a3b2c6f-kube-api-access-kbvbg\") pod \"cilium-xwmxk\" (UID: \"ce5e1f36-66af-40cd-864f-99dd3a3b2c6f\") " pod="kube-system/cilium-xwmxk"
Dec 13 14:16:43.778619 kubelet[2937]: I1213 14:16:43.778078 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ce5e1f36-66af-40cd-864f-99dd3a3b2c6f-cni-path\") pod \"cilium-xwmxk\" (UID: \"ce5e1f36-66af-40cd-864f-99dd3a3b2c6f\") " pod="kube-system/cilium-xwmxk"
Dec 13 14:16:43.778619 kubelet[2937]: I1213 14:16:43.778136 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ce5e1f36-66af-40cd-864f-99dd3a3b2c6f-cilium-run\") pod \"cilium-xwmxk\" (UID: \"ce5e1f36-66af-40cd-864f-99dd3a3b2c6f\") " pod="kube-system/cilium-xwmxk"
Dec 13 14:16:43.778619 kubelet[2937]: I1213 14:16:43.778210 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ce5e1f36-66af-40cd-864f-99dd3a3b2c6f-xtables-lock\") pod \"cilium-xwmxk\" (UID: \"ce5e1f36-66af-40cd-864f-99dd3a3b2c6f\") " pod="kube-system/cilium-xwmxk"
Dec 13 14:16:43.778619 kubelet[2937]: I1213 14:16:43.778292 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ce5e1f36-66af-40cd-864f-99dd3a3b2c6f-bpf-maps\") pod \"cilium-xwmxk\" (UID: \"ce5e1f36-66af-40cd-864f-99dd3a3b2c6f\") " pod="kube-system/cilium-xwmxk"
Dec 13 14:16:43.778619 kubelet[2937]: I1213 14:16:43.778366 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ce5e1f36-66af-40cd-864f-99dd3a3b2c6f-hubble-tls\") pod \"cilium-xwmxk\" (UID: \"ce5e1f36-66af-40cd-864f-99dd3a3b2c6f\") " pod="kube-system/cilium-xwmxk"
Dec 13 14:16:43.778619 kubelet[2937]: I1213 14:16:43.778438 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ce5e1f36-66af-40cd-864f-99dd3a3b2c6f-clustermesh-secrets\") pod \"cilium-xwmxk\" (UID: \"ce5e1f36-66af-40cd-864f-99dd3a3b2c6f\") " pod="kube-system/cilium-xwmxk"
Dec 13 14:16:43.779020 kubelet[2937]: I1213 14:16:43.778490 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ce5e1f36-66af-40cd-864f-99dd3a3b2c6f-cilium-ipsec-secrets\") pod \"cilium-xwmxk\" (UID: \"ce5e1f36-66af-40cd-864f-99dd3a3b2c6f\") " pod="kube-system/cilium-xwmxk"
Dec 13 14:16:43.779020 kubelet[2937]: I1213 14:16:43.778593 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ce5e1f36-66af-40cd-864f-99dd3a3b2c6f-cilium-cgroup\") pod \"cilium-xwmxk\" (UID: \"ce5e1f36-66af-40cd-864f-99dd3a3b2c6f\") " pod="kube-system/cilium-xwmxk"
Dec 13 14:16:43.779020 kubelet[2937]: I1213 14:16:43.778664 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ce5e1f36-66af-40cd-864f-99dd3a3b2c6f-lib-modules\") pod \"cilium-xwmxk\" (UID: \"ce5e1f36-66af-40cd-864f-99dd3a3b2c6f\") " pod="kube-system/cilium-xwmxk"
Dec 13 14:16:43.779020 kubelet[2937]: I1213 14:16:43.778715 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ce5e1f36-66af-40cd-864f-99dd3a3b2c6f-cilium-config-path\") pod \"cilium-xwmxk\" (UID: \"ce5e1f36-66af-40cd-864f-99dd3a3b2c6f\") " pod="kube-system/cilium-xwmxk"
Dec 13 14:16:43.779020 kubelet[2937]: I1213 14:16:43.778789 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ce5e1f36-66af-40cd-864f-99dd3a3b2c6f-host-proc-sys-net\") pod \"cilium-xwmxk\" (UID: \"ce5e1f36-66af-40cd-864f-99dd3a3b2c6f\") " pod="kube-system/cilium-xwmxk"
Dec 13 14:16:43.779305 kubelet[2937]: I1213 14:16:43.778862 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ce5e1f36-66af-40cd-864f-99dd3a3b2c6f-host-proc-sys-kernel\") pod \"cilium-xwmxk\" (UID: \"ce5e1f36-66af-40cd-864f-99dd3a3b2c6f\") " pod="kube-system/cilium-xwmxk"
Dec 13 14:16:43.779305 kubelet[2937]: I1213 14:16:43.778934 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ce5e1f36-66af-40cd-864f-99dd3a3b2c6f-hostproc\") pod \"cilium-xwmxk\" (UID: \"ce5e1f36-66af-40cd-864f-99dd3a3b2c6f\") " pod="kube-system/cilium-xwmxk"
Dec 13 14:16:43.779305 kubelet[2937]: I1213 14:16:43.779012 2937 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ce5e1f36-66af-40cd-864f-99dd3a3b2c6f-etc-cni-netd\") pod \"cilium-xwmxk\" (UID: \"ce5e1f36-66af-40cd-864f-99dd3a3b2c6f\") " pod="kube-system/cilium-xwmxk"
Dec 13 14:16:44.011531 env[1825]: time="2024-12-13T14:16:44.010970863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xwmxk,Uid:ce5e1f36-66af-40cd-864f-99dd3a3b2c6f,Namespace:kube-system,Attempt:0,}"
Dec 13 14:16:44.050680 env[1825]: time="2024-12-13T14:16:44.046513272Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:16:44.050680 env[1825]: time="2024-12-13T14:16:44.046695697Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:16:44.050680 env[1825]: time="2024-12-13T14:16:44.046724882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:16:44.050680 env[1825]: time="2024-12-13T14:16:44.046958056Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b2b576306392bcbbcf7a692d7e31ed82f3e9eaa01ef0daf0663ed2422666c3b7 pid=4837 runtime=io.containerd.runc.v2
Dec 13 14:16:44.079464 systemd[1]: Started cri-containerd-b2b576306392bcbbcf7a692d7e31ed82f3e9eaa01ef0daf0663ed2422666c3b7.scope.
Dec 13 14:16:44.142010 env[1825]: time="2024-12-13T14:16:44.141951767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xwmxk,Uid:ce5e1f36-66af-40cd-864f-99dd3a3b2c6f,Namespace:kube-system,Attempt:0,} returns sandbox id \"b2b576306392bcbbcf7a692d7e31ed82f3e9eaa01ef0daf0663ed2422666c3b7\""
Dec 13 14:16:44.149881 env[1825]: time="2024-12-13T14:16:44.149818841Z" level=info msg="CreateContainer within sandbox \"b2b576306392bcbbcf7a692d7e31ed82f3e9eaa01ef0daf0663ed2422666c3b7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 14:16:44.179568 env[1825]: time="2024-12-13T14:16:44.179492015Z" level=info msg="CreateContainer within sandbox \"b2b576306392bcbbcf7a692d7e31ed82f3e9eaa01ef0daf0663ed2422666c3b7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0db5536d7ed0247a02c300e9f9cfe36881201e694301f35a0b55050b10a32b0e\""
Dec 13 14:16:44.180904 env[1825]: time="2024-12-13T14:16:44.180848761Z" level=info msg="StartContainer for \"0db5536d7ed0247a02c300e9f9cfe36881201e694301f35a0b55050b10a32b0e\""
Dec 13 14:16:44.200253 kubelet[2937]: I1213 14:16:44.200206 2937 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="2c7588d0-39bd-4534-8a9d-307ff118b3e3" path="/var/lib/kubelet/pods/2c7588d0-39bd-4534-8a9d-307ff118b3e3/volumes"
Dec 13 14:16:44.218956 systemd[1]: Started cri-containerd-0db5536d7ed0247a02c300e9f9cfe36881201e694301f35a0b55050b10a32b0e.scope.
Dec 13 14:16:44.275472 env[1825]: time="2024-12-13T14:16:44.275338058Z" level=info msg="StartContainer for \"0db5536d7ed0247a02c300e9f9cfe36881201e694301f35a0b55050b10a32b0e\" returns successfully"
Dec 13 14:16:44.294040 systemd[1]: cri-containerd-0db5536d7ed0247a02c300e9f9cfe36881201e694301f35a0b55050b10a32b0e.scope: Deactivated successfully.
Dec 13 14:16:44.347514 env[1825]: time="2024-12-13T14:16:44.347446246Z" level=info msg="shim disconnected" id=0db5536d7ed0247a02c300e9f9cfe36881201e694301f35a0b55050b10a32b0e
Dec 13 14:16:44.347821 env[1825]: time="2024-12-13T14:16:44.347514227Z" level=warning msg="cleaning up after shim disconnected" id=0db5536d7ed0247a02c300e9f9cfe36881201e694301f35a0b55050b10a32b0e namespace=k8s.io
Dec 13 14:16:44.347821 env[1825]: time="2024-12-13T14:16:44.347536931Z" level=info msg="cleaning up dead shim"
Dec 13 14:16:44.363048 env[1825]: time="2024-12-13T14:16:44.362975168Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:16:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4919 runtime=io.containerd.runc.v2\n"
Dec 13 14:16:44.640670 env[1825]: time="2024-12-13T14:16:44.638931935Z" level=info msg="CreateContainer within sandbox \"b2b576306392bcbbcf7a692d7e31ed82f3e9eaa01ef0daf0663ed2422666c3b7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 14:16:44.667805 env[1825]: time="2024-12-13T14:16:44.667723616Z" level=info msg="CreateContainer within sandbox \"b2b576306392bcbbcf7a692d7e31ed82f3e9eaa01ef0daf0663ed2422666c3b7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"32e8adaf77dd2c31dc2bb2068e8aad88311a7f2fec75c686bc5a603d70e0da94\""
Dec 13 14:16:44.668752 env[1825]: time="2024-12-13T14:16:44.668705694Z" level=info msg="StartContainer for \"32e8adaf77dd2c31dc2bb2068e8aad88311a7f2fec75c686bc5a603d70e0da94\""
Dec 13 14:16:44.712838 systemd[1]: Started cri-containerd-32e8adaf77dd2c31dc2bb2068e8aad88311a7f2fec75c686bc5a603d70e0da94.scope.
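The CreateContainer/StartContainer pairs in these lines are CRI requests that containerd executes inside the sandbox, spawning a runc shim per container (the "starting signal loop ... runtime=io.containerd.runc.v2" entries). Roughly the same lifecycle can be driven through containerd's Go client; a minimal sketch, assuming a local containerd socket and a pre-pulled image (the image tag and container IDs are illustrative):

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Connect to the same containerd instance the kubelet talks to.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed containers live in the "k8s.io" namespace,
	// which is why the shim lines above log namespace=k8s.io.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	image, err := client.GetImage(ctx, "quay.io/cilium/cilium:v1.14.0") // illustrative tag
	if err != nil {
		log.Fatal(err)
	}

	container, err := client.NewContainer(ctx, "demo",
		containerd.WithNewSnapshot("demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)))
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// NewTask + Start mirror the CreateContainer/StartContainer pair above.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
}
```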
Dec 13 14:16:44.788866 env[1825]: time="2024-12-13T14:16:44.788793718Z" level=info msg="StartContainer for \"32e8adaf77dd2c31dc2bb2068e8aad88311a7f2fec75c686bc5a603d70e0da94\" returns successfully"
Dec 13 14:16:44.803918 systemd[1]: cri-containerd-32e8adaf77dd2c31dc2bb2068e8aad88311a7f2fec75c686bc5a603d70e0da94.scope: Deactivated successfully.
Dec 13 14:16:44.851998 env[1825]: time="2024-12-13T14:16:44.851935392Z" level=info msg="shim disconnected" id=32e8adaf77dd2c31dc2bb2068e8aad88311a7f2fec75c686bc5a603d70e0da94
Dec 13 14:16:44.852469 env[1825]: time="2024-12-13T14:16:44.852424193Z" level=warning msg="cleaning up after shim disconnected" id=32e8adaf77dd2c31dc2bb2068e8aad88311a7f2fec75c686bc5a603d70e0da94 namespace=k8s.io
Dec 13 14:16:44.852664 env[1825]: time="2024-12-13T14:16:44.852635419Z" level=info msg="cleaning up dead shim"
Dec 13 14:16:44.867368 env[1825]: time="2024-12-13T14:16:44.867310917Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:16:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4981 runtime=io.containerd.runc.v2\n"
Dec 13 14:16:45.405593 kubelet[2937]: W1213 14:16:45.405514 2937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c7588d0_39bd_4534_8a9d_307ff118b3e3.slice/cri-containerd-7ab7675ed28672fb201373fb121d741aa2efead6daf1ed6350c75539dbedf492.scope WatchSource:0}: container "7ab7675ed28672fb201373fb121d741aa2efead6daf1ed6350c75539dbedf492" in namespace "k8s.io": not found
Dec 13 14:16:45.662134 env[1825]: time="2024-12-13T14:16:45.660755032Z" level=info msg="CreateContainer within sandbox \"b2b576306392bcbbcf7a692d7e31ed82f3e9eaa01ef0daf0663ed2422666c3b7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 14:16:45.716214 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1641053981.mount: Deactivated successfully.
Dec 13 14:16:45.727752 env[1825]: time="2024-12-13T14:16:45.727636576Z" level=info msg="CreateContainer within sandbox \"b2b576306392bcbbcf7a692d7e31ed82f3e9eaa01ef0daf0663ed2422666c3b7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"569e5ccd7d468c6a58fb0a37e7cfd181a5f0660c81add92d22f47de0ede5cf9e\""
Dec 13 14:16:45.728818 env[1825]: time="2024-12-13T14:16:45.728756223Z" level=info msg="StartContainer for \"569e5ccd7d468c6a58fb0a37e7cfd181a5f0660c81add92d22f47de0ede5cf9e\""
Dec 13 14:16:45.769688 systemd[1]: Started cri-containerd-569e5ccd7d468c6a58fb0a37e7cfd181a5f0660c81add92d22f47de0ede5cf9e.scope.
Dec 13 14:16:45.854197 env[1825]: time="2024-12-13T14:16:45.854129139Z" level=info msg="StartContainer for \"569e5ccd7d468c6a58fb0a37e7cfd181a5f0660c81add92d22f47de0ede5cf9e\" returns successfully"
Dec 13 14:16:45.862686 systemd[1]: cri-containerd-569e5ccd7d468c6a58fb0a37e7cfd181a5f0660c81add92d22f47de0ede5cf9e.scope: Deactivated successfully.
Dec 13 14:16:45.907235 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-569e5ccd7d468c6a58fb0a37e7cfd181a5f0660c81add92d22f47de0ede5cf9e-rootfs.mount: Deactivated successfully.
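Cilium's apply-sysctl-overwrites init container, which starts and exits above, adjusts kernel parameters through /proc/sys. A sketch of that pattern follows; the specific key is illustrative, since the log does not show which sysctls Cilium writes:

```go
package main

import (
	"log"
	"os"
	"path/filepath"
	"strings"
)

// setSysctl writes a value under /proc/sys, e.g. key "net.ipv4.ip_forward".
// Simplification: keys whose components themselves contain dots (interface
// names, for instance) would need smarter splitting.
func setSysctl(key, value string) error {
	path := filepath.Join("/proc/sys", strings.ReplaceAll(key, ".", "/"))
	return os.WriteFile(path, []byte(value), 0o644)
}

func main() {
	// Illustrative only: one sysctl an agent like Cilium typically needs.
	if err := setSysctl("net.ipv4.ip_forward", "1"); err != nil {
		log.Fatal(err)
	}
}
```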
Dec 13 14:16:45.923044 env[1825]: time="2024-12-13T14:16:45.922875826Z" level=info msg="shim disconnected" id=569e5ccd7d468c6a58fb0a37e7cfd181a5f0660c81add92d22f47de0ede5cf9e
Dec 13 14:16:45.923044 env[1825]: time="2024-12-13T14:16:45.922947178Z" level=warning msg="cleaning up after shim disconnected" id=569e5ccd7d468c6a58fb0a37e7cfd181a5f0660c81add92d22f47de0ede5cf9e namespace=k8s.io
Dec 13 14:16:45.923044 env[1825]: time="2024-12-13T14:16:45.922970435Z" level=info msg="cleaning up dead shim"
Dec 13 14:16:45.937932 env[1825]: time="2024-12-13T14:16:45.937866158Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:16:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5037 runtime=io.containerd.runc.v2\n"
Dec 13 14:16:46.407675 kubelet[2937]: E1213 14:16:46.407623 2937 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 14:16:46.661924 env[1825]: time="2024-12-13T14:16:46.661771797Z" level=info msg="CreateContainer within sandbox \"b2b576306392bcbbcf7a692d7e31ed82f3e9eaa01ef0daf0663ed2422666c3b7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 14:16:46.687168 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1180353748.mount: Deactivated successfully.
Dec 13 14:16:46.696616 env[1825]: time="2024-12-13T14:16:46.696457345Z" level=info msg="CreateContainer within sandbox \"b2b576306392bcbbcf7a692d7e31ed82f3e9eaa01ef0daf0663ed2422666c3b7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"24f92e7a4957743b8f7327943c47b48f80a53173117085f1bf3df1eb2ce9a243\""
Dec 13 14:16:46.697669 env[1825]: time="2024-12-13T14:16:46.697622568Z" level=info msg="StartContainer for \"24f92e7a4957743b8f7327943c47b48f80a53173117085f1bf3df1eb2ce9a243\""
Dec 13 14:16:46.701423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1235387728.mount: Deactivated successfully.
Dec 13 14:16:46.737536 systemd[1]: Started cri-containerd-24f92e7a4957743b8f7327943c47b48f80a53173117085f1bf3df1eb2ce9a243.scope.
Dec 13 14:16:46.803818 env[1825]: time="2024-12-13T14:16:46.803740689Z" level=info msg="StartContainer for \"24f92e7a4957743b8f7327943c47b48f80a53173117085f1bf3df1eb2ce9a243\" returns successfully"
Dec 13 14:16:46.811726 systemd[1]: cri-containerd-24f92e7a4957743b8f7327943c47b48f80a53173117085f1bf3df1eb2ce9a243.scope: Deactivated successfully.
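The mount-bpf-fs init step torn down at the top of this block mounts the BPF filesystem so Cilium can pin maps across agent restarts. The underlying operation is a single mount(2) call, sketched here with x/sys/unix (idempotence and "already mounted" handling are simplified to an EBUSY check):

```go
package main

import (
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	// Mount bpffs at its conventional location; this is the effect of the
	// mount-bpf-fs init container in the log above. EBUSY means something
	// is already mounted there, which we treat as success in this sketch.
	err := unix.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, "")
	if err != nil && err != unix.EBUSY {
		log.Fatal(err)
	}
}
```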
Dec 13 14:16:46.877766 env[1825]: time="2024-12-13T14:16:46.877700510Z" level=info msg="shim disconnected" id=24f92e7a4957743b8f7327943c47b48f80a53173117085f1bf3df1eb2ce9a243
Dec 13 14:16:46.878107 env[1825]: time="2024-12-13T14:16:46.878074422Z" level=warning msg="cleaning up after shim disconnected" id=24f92e7a4957743b8f7327943c47b48f80a53173117085f1bf3df1eb2ce9a243 namespace=k8s.io
Dec 13 14:16:46.878229 env[1825]: time="2024-12-13T14:16:46.878200963Z" level=info msg="cleaning up dead shim"
Dec 13 14:16:46.897239 env[1825]: time="2024-12-13T14:16:46.897181369Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:16:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5093 runtime=io.containerd.runc.v2\n"
Dec 13 14:16:47.661341 env[1825]: time="2024-12-13T14:16:47.661259006Z" level=info msg="CreateContainer within sandbox \"b2b576306392bcbbcf7a692d7e31ed82f3e9eaa01ef0daf0663ed2422666c3b7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 14:16:47.710885 env[1825]: time="2024-12-13T14:16:47.710805125Z" level=info msg="CreateContainer within sandbox \"b2b576306392bcbbcf7a692d7e31ed82f3e9eaa01ef0daf0663ed2422666c3b7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"055fe83edca62c742ba2def6f84f80f1b29ce62ae4db5c147f8c5df52651680d\""
Dec 13 14:16:47.713264 env[1825]: time="2024-12-13T14:16:47.713208125Z" level=info msg="StartContainer for \"055fe83edca62c742ba2def6f84f80f1b29ce62ae4db5c147f8c5df52651680d\""
Dec 13 14:16:47.759587 systemd[1]: Started cri-containerd-055fe83edca62c742ba2def6f84f80f1b29ce62ae4db5c147f8c5df52651680d.scope.
Dec 13 14:16:47.814831 env[1825]: time="2024-12-13T14:16:47.814745711Z" level=info msg="StartContainer for \"055fe83edca62c742ba2def6f84f80f1b29ce62ae4db5c147f8c5df52651680d\" returns successfully"
Dec 13 14:16:48.525230 kubelet[2937]: W1213 14:16:48.525176 2937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce5e1f36_66af_40cd_864f_99dd3a3b2c6f.slice/cri-containerd-0db5536d7ed0247a02c300e9f9cfe36881201e694301f35a0b55050b10a32b0e.scope WatchSource:0}: task 0db5536d7ed0247a02c300e9f9cfe36881201e694301f35a0b55050b10a32b0e not found: not found
Dec 13 14:16:48.652724 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
Dec 13 14:16:48.699272 kubelet[2937]: I1213 14:16:48.699201 2937 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-xwmxk" podStartSLOduration=5.699142142 podStartE2EDuration="5.699142142s" podCreationTimestamp="2024-12-13 14:16:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:16:48.698780267 +0000 UTC m=+132.785292524" watchObservedRunningTime="2024-12-13 14:16:48.699142142 +0000 UTC m=+132.785654399"
Dec 13 14:16:48.863907 kubelet[2937]: I1213 14:16:48.863332 2937 setters.go:568] "Node became not ready" node="ip-172-31-24-251" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T14:16:48Z","lastTransitionTime":"2024-12-13T14:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 14:16:49.044420 systemd[1]: run-containerd-runc-k8s.io-055fe83edca62c742ba2def6f84f80f1b29ce62ae4db5c147f8c5df52651680d-runc.hkemfj.mount: Deactivated successfully.
Dec 13 14:16:51.293241 systemd[1]: run-containerd-runc-k8s.io-055fe83edca62c742ba2def6f84f80f1b29ce62ae4db5c147f8c5df52651680d-runc.clGojX.mount: Deactivated successfully.
Dec 13 14:16:51.638737 kubelet[2937]: W1213 14:16:51.638362 2937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce5e1f36_66af_40cd_864f_99dd3a3b2c6f.slice/cri-containerd-32e8adaf77dd2c31dc2bb2068e8aad88311a7f2fec75c686bc5a603d70e0da94.scope WatchSource:0}: task 32e8adaf77dd2c31dc2bb2068e8aad88311a7f2fec75c686bc5a603d70e0da94 not found: not found
Dec 13 14:16:52.809421 (udev-worker)[5656]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 14:16:52.814313 (udev-worker)[5657]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 14:16:52.817502 systemd-networkd[1540]: lxc_health: Link UP
Dec 13 14:16:52.835706 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 14:16:52.836014 systemd-networkd[1540]: lxc_health: Gained carrier
Dec 13 14:16:53.896833 systemd-networkd[1540]: lxc_health: Gained IPv6LL
Dec 13 14:16:54.750778 kubelet[2937]: W1213 14:16:54.750698 2937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce5e1f36_66af_40cd_864f_99dd3a3b2c6f.slice/cri-containerd-569e5ccd7d468c6a58fb0a37e7cfd181a5f0660c81add92d22f47de0ede5cf9e.scope WatchSource:0}: task 569e5ccd7d468c6a58fb0a37e7cfd181a5f0660c81add92d22f47de0ede5cf9e not found: not found
Dec 13 14:16:56.005182 systemd[1]: run-containerd-runc-k8s.io-055fe83edca62c742ba2def6f84f80f1b29ce62ae4db5c147f8c5df52651680d-runc.EgjzMM.mount: Deactivated successfully.
Dec 13 14:16:57.859348 kubelet[2937]: W1213 14:16:57.859290 2937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce5e1f36_66af_40cd_864f_99dd3a3b2c6f.slice/cri-containerd-24f92e7a4957743b8f7327943c47b48f80a53173117085f1bf3df1eb2ce9a243.scope WatchSource:0}: task 24f92e7a4957743b8f7327943c47b48f80a53173117085f1bf3df1eb2ce9a243 not found: not found
Dec 13 14:16:58.281349 systemd[1]: run-containerd-runc-k8s.io-055fe83edca62c742ba2def6f84f80f1b29ce62ae4db5c147f8c5df52651680d-runc.znukVq.mount: Deactivated successfully.
Dec 13 14:16:58.419954 sshd[4757]: pam_unix(sshd:session): session closed for user core
Dec 13 14:16:58.426082 systemd[1]: sshd@27-172.31.24.251:22-139.178.89.65:55368.service: Deactivated successfully.
Dec 13 14:16:58.427381 systemd[1]: session-28.scope: Deactivated successfully.
Dec 13 14:16:58.428067 systemd-logind[1810]: Session 28 logged out. Waiting for processes to exit.
Dec 13 14:16:58.430489 systemd-logind[1810]: Removed session 28.
Dec 13 14:17:11.994300 systemd[1]: cri-containerd-1ff61cdbbcf11678c3630901b9a0f92652d2114b7b4f20f8260b5adc90f0c382.scope: Deactivated successfully.
Dec 13 14:17:11.994891 systemd[1]: cri-containerd-1ff61cdbbcf11678c3630901b9a0f92652d2114b7b4f20f8260b5adc90f0c382.scope: Consumed 3.672s CPU time.
Dec 13 14:17:12.032260 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ff61cdbbcf11678c3630901b9a0f92652d2114b7b4f20f8260b5adc90f0c382-rootfs.mount: Deactivated successfully.
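lxc_health is the interface Cilium creates for its own datapath health probes; systemd-networkd merely observes it coming up ("Link UP", "Gained carrier"). Bringing a link up programmatically looks roughly like this with the vishvananda/netlink package (a sketch of the operation, not Cilium's code):

```go
package main

import (
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	// Look up the health-probe interface by the name seen in the log
	// and flip it administratively up (the equivalent of `ip link set up`).
	link, err := netlink.LinkByName("lxc_health")
	if err != nil {
		log.Fatal(err)
	}
	if err := netlink.LinkSetUp(link); err != nil {
		log.Fatal(err)
	}
}
```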
Dec 13 14:17:12.044674 env[1825]: time="2024-12-13T14:17:12.044602450Z" level=info msg="shim disconnected" id=1ff61cdbbcf11678c3630901b9a0f92652d2114b7b4f20f8260b5adc90f0c382
Dec 13 14:17:12.045535 env[1825]: time="2024-12-13T14:17:12.045497525Z" level=warning msg="cleaning up after shim disconnected" id=1ff61cdbbcf11678c3630901b9a0f92652d2114b7b4f20f8260b5adc90f0c382 namespace=k8s.io
Dec 13 14:17:12.045712 env[1825]: time="2024-12-13T14:17:12.045682123Z" level=info msg="cleaning up dead shim"
Dec 13 14:17:12.059577 env[1825]: time="2024-12-13T14:17:12.059500821Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:17:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5771 runtime=io.containerd.runc.v2\n"
Dec 13 14:17:12.728522 kubelet[2937]: I1213 14:17:12.728484 2937 scope.go:117] "RemoveContainer" containerID="1ff61cdbbcf11678c3630901b9a0f92652d2114b7b4f20f8260b5adc90f0c382"
Dec 13 14:17:12.733113 env[1825]: time="2024-12-13T14:17:12.733059493Z" level=info msg="CreateContainer within sandbox \"1a7fcab066694a82c1f1166d086575559e1cd4811b55e07f26a93daa7c2e5751\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Dec 13 14:17:12.759323 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1187045277.mount: Deactivated successfully.
Dec 13 14:17:12.772288 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2180242634.mount: Deactivated successfully.
Dec 13 14:17:12.782808 env[1825]: time="2024-12-13T14:17:12.782747209Z" level=info msg="CreateContainer within sandbox \"1a7fcab066694a82c1f1166d086575559e1cd4811b55e07f26a93daa7c2e5751\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"c40a90de06767c70f2c6fd891fa37056d1c80a4f95c2a66f37aa29b8a3b8f004\""
Dec 13 14:17:12.783767 env[1825]: time="2024-12-13T14:17:12.783717584Z" level=info msg="StartContainer for \"c40a90de06767c70f2c6fd891fa37056d1c80a4f95c2a66f37aa29b8a3b8f004\""
Dec 13 14:17:12.814319 systemd[1]: Started cri-containerd-c40a90de06767c70f2c6fd891fa37056d1c80a4f95c2a66f37aa29b8a3b8f004.scope.
Dec 13 14:17:12.902591 env[1825]: time="2024-12-13T14:17:12.902501803Z" level=info msg="StartContainer for \"c40a90de06767c70f2c6fd891fa37056d1c80a4f95c2a66f37aa29b8a3b8f004\" returns successfully"
Dec 13 14:17:16.995007 systemd[1]: cri-containerd-2f8eb0e33303fd7ee5f06683c37d899c4d5096b75af8f095aa9c63efb3aa7bb0.scope: Deactivated successfully.
Dec 13 14:17:16.995589 systemd[1]: cri-containerd-2f8eb0e33303fd7ee5f06683c37d899c4d5096b75af8f095aa9c63efb3aa7bb0.scope: Consumed 3.583s CPU time.
Dec 13 14:17:17.033039 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f8eb0e33303fd7ee5f06683c37d899c4d5096b75af8f095aa9c63efb3aa7bb0-rootfs.mount: Deactivated successfully.
Dec 13 14:17:17.051419 env[1825]: time="2024-12-13T14:17:17.051348430Z" level=info msg="shim disconnected" id=2f8eb0e33303fd7ee5f06683c37d899c4d5096b75af8f095aa9c63efb3aa7bb0
Dec 13 14:17:17.052289 env[1825]: time="2024-12-13T14:17:17.051417622Z" level=warning msg="cleaning up after shim disconnected" id=2f8eb0e33303fd7ee5f06683c37d899c4d5096b75af8f095aa9c63efb3aa7bb0 namespace=k8s.io
Dec 13 14:17:17.052289 env[1825]: time="2024-12-13T14:17:17.051441227Z" level=info msg="cleaning up dead shim"
Dec 13 14:17:17.065272 env[1825]: time="2024-12-13T14:17:17.065171724Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:17:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5832 runtime=io.containerd.runc.v2\n"
Dec 13 14:17:17.744721 kubelet[2937]: I1213 14:17:17.744666 2937 scope.go:117] "RemoveContainer" containerID="2f8eb0e33303fd7ee5f06683c37d899c4d5096b75af8f095aa9c63efb3aa7bb0"
Dec 13 14:17:17.748287 env[1825]: time="2024-12-13T14:17:17.748227442Z" level=info msg="CreateContainer within sandbox \"55c22b3b9c153fbf51c2dd7dad7da43de521d3a46e494a920728c4a02bf85690\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Dec 13 14:17:17.775936 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1258141219.mount: Deactivated successfully.
Dec 13 14:17:17.789729 env[1825]: time="2024-12-13T14:17:17.789649108Z" level=info msg="CreateContainer within sandbox \"55c22b3b9c153fbf51c2dd7dad7da43de521d3a46e494a920728c4a02bf85690\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"23723e4d5814c18156ac906a7f7ef99d5433b3b6a643a9a1476cb12236916f15\""
Dec 13 14:17:17.790536 env[1825]: time="2024-12-13T14:17:17.790495319Z" level=info msg="StartContainer for \"23723e4d5814c18156ac906a7f7ef99d5433b3b6a643a9a1476cb12236916f15\""
Dec 13 14:17:17.832428 systemd[1]: Started cri-containerd-23723e4d5814c18156ac906a7f7ef99d5433b3b6a643a9a1476cb12236916f15.scope.
Dec 13 14:17:17.920226 env[1825]: time="2024-12-13T14:17:17.920165953Z" level=info msg="StartContainer for \"23723e4d5814c18156ac906a7f7ef99d5433b3b6a643a9a1476cb12236916f15\" returns successfully"
Dec 13 14:17:19.010642 kubelet[2937]: E1213 14:17:19.010449 2937 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.251:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-251?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 13 14:17:29.011469 kubelet[2937]: E1213 14:17:29.011425 2937 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.251:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-251?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
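The failing Put to .../kube-node-lease/leases/ip-172-31-24-251?timeout=10s is the kubelet renewing its node Lease; when renewals keep timing out, the node eventually transitions to NotReady. The equivalent renewal with client-go looks roughly like this (a sketch only; the kubelet's own lease controller handles retries, clock skew, and lease creation):

```go
package main

import (
	"context"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Mirror the ?timeout=10s on the failing requests in the log.
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	leases := clientset.CoordinationV1().Leases("kube-node-lease")
	lease, err := leases.Get(ctx, "ip-172-31-24-251", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	now := metav1.NewMicroTime(time.Now())
	lease.Spec.RenewTime = &now
	if _, err := leases.Update(ctx, lease, metav1.UpdateOptions{}); err != nil {
		// This is the path that produced the "Failed to update lease" errors above.
		log.Fatal(err)
	}
}
```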