Dec 13 14:13:47.946543 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Dec 13 14:13:47.946578 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Dec 13 12:58:58 -00 2024
Dec 13 14:13:47.946601 kernel: efi: EFI v2.70 by EDK II
Dec 13 14:13:47.946616 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7171cf98
Dec 13 14:13:47.946630 kernel: ACPI: Early table checksum verification disabled
Dec 13 14:13:47.946643 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Dec 13 14:13:47.946659 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Dec 13 14:13:47.946673 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Dec 13 14:13:47.946687 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Dec 13 14:13:47.946721 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Dec 13 14:13:47.946742 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Dec 13 14:13:47.946756 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Dec 13 14:13:47.946771 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Dec 13 14:13:47.946785 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Dec 13 14:13:47.946802 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Dec 13 14:13:47.946821 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Dec 13 14:13:47.946836 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Dec 13 14:13:47.946851 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Dec 13 14:13:47.946865 kernel: printk: bootconsole [uart0] enabled
Dec 13 14:13:47.946879 kernel: NUMA: Failed to initialise from firmware
Dec 13 14:13:47.946895 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Dec 13 14:13:47.946909 kernel: NUMA: NODE_DATA [mem 0x4b5843900-0x4b5848fff]
Dec 13 14:13:47.946924 kernel: Zone ranges:
Dec 13 14:13:47.946938 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Dec 13 14:13:47.946953 kernel: DMA32 empty
Dec 13 14:13:47.946967 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Dec 13 14:13:47.946986 kernel: Movable zone start for each node
Dec 13 14:13:47.947001 kernel: Early memory node ranges
Dec 13 14:13:47.947015 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Dec 13 14:13:47.947029 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Dec 13 14:13:47.947044 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Dec 13 14:13:47.947058 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Dec 13 14:13:47.947073 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Dec 13 14:13:47.947087 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Dec 13 14:13:47.947102 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Dec 13 14:13:47.947116 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Dec 13 14:13:47.947130 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Dec 13 14:13:47.947145 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Dec 13 14:13:47.947163 kernel: psci: probing for conduit method from ACPI.
Dec 13 14:13:47.947178 kernel: psci: PSCIv1.0 detected in firmware.
Dec 13 14:13:47.947199 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 13 14:13:47.947215 kernel: psci: Trusted OS migration not required
Dec 13 14:13:47.947230 kernel: psci: SMC Calling Convention v1.1
Dec 13 14:13:47.947249 kernel: ACPI: SRAT not present
Dec 13 14:13:47.947265 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
Dec 13 14:13:47.947281 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
Dec 13 14:13:47.947296 kernel: pcpu-alloc: [0] 0 [0] 1
Dec 13 14:13:47.947312 kernel: Detected PIPT I-cache on CPU0
Dec 13 14:13:47.947327 kernel: CPU features: detected: GIC system register CPU interface
Dec 13 14:13:47.947342 kernel: CPU features: detected: Spectre-v2
Dec 13 14:13:47.947357 kernel: CPU features: detected: Spectre-v3a
Dec 13 14:13:47.947373 kernel: CPU features: detected: Spectre-BHB
Dec 13 14:13:47.947388 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 13 14:13:47.947403 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 13 14:13:47.947422 kernel: CPU features: detected: ARM erratum 1742098
Dec 13 14:13:47.947438 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Dec 13 14:13:47.947453 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Dec 13 14:13:47.947468 kernel: Policy zone: Normal
Dec 13 14:13:47.947486 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5997a8cf94b1df1856dc785f0a7074604bbf4c21fdcca24a1996021471a77601
Dec 13 14:13:47.947503 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 14:13:47.947518 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 14:13:47.947534 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 14:13:47.947549 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 14:13:47.947564 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Dec 13 14:13:47.947585 kernel: Memory: 3824524K/4030464K available (9792K kernel code, 2092K rwdata, 7576K rodata, 36416K init, 777K bss, 205940K reserved, 0K cma-reserved)
Dec 13 14:13:47.947601 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 14:13:47.947616 kernel: trace event string verifier disabled
Dec 13 14:13:47.947631 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 14:13:47.947647 kernel: rcu: RCU event tracing is enabled.
Dec 13 14:13:47.947663 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 14:13:47.947679 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 14:13:47.947709 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 14:13:47.947729 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
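
[Editor's note] The kernel command line above is Flatcar's /usr verification setup: mount.usr=/dev/mapper/usr mounts /usr through a dm-verity device whose root hash is pinned by verity.usrhash. A minimal sketch of inspecting that mapping from a booted system; the mapping name "usr" is taken from the command line, and this is not part of the log:

    # Show the dm-verity mapping backing /usr, including its root hash and status
    veritysetup status usr
    # Confirm the root hash the kernel was actually booted with
    grep -o 'verity.usrhash=[0-9a-f]*' /proc/cmdline
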
Dec 13 14:13:47.947745 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 14:13:47.947760 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 13 14:13:47.947775 kernel: GICv3: 96 SPIs implemented
Dec 13 14:13:47.947795 kernel: GICv3: 0 Extended SPIs implemented
Dec 13 14:13:47.947811 kernel: GICv3: Distributor has no Range Selector support
Dec 13 14:13:47.947826 kernel: Root IRQ handler: gic_handle_irq
Dec 13 14:13:47.947841 kernel: GICv3: 16 PPIs implemented
Dec 13 14:13:47.947857 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Dec 13 14:13:47.947872 kernel: ACPI: SRAT not present
Dec 13 14:13:47.947887 kernel: ITS [mem 0x10080000-0x1009ffff]
Dec 13 14:13:47.947902 kernel: ITS@0x0000000010080000: allocated 8192 Devices @400090000 (indirect, esz 8, psz 64K, shr 1)
Dec 13 14:13:47.947918 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000a0000 (flat, esz 8, psz 64K, shr 1)
Dec 13 14:13:47.947933 kernel: GICv3: using LPI property table @0x00000004000b0000
Dec 13 14:13:47.947948 kernel: ITS: Using hypervisor restricted LPI range [128]
Dec 13 14:13:47.947967 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000d0000
Dec 13 14:13:47.947983 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Dec 13 14:13:47.947999 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Dec 13 14:13:47.948014 kernel: sched_clock: 56 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Dec 13 14:13:47.948030 kernel: Console: colour dummy device 80x25
Dec 13 14:13:47.948045 kernel: printk: console [tty1] enabled
Dec 13 14:13:47.948061 kernel: ACPI: Core revision 20210730
Dec 13 14:13:47.948077 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Dec 13 14:13:47.948093 kernel: pid_max: default: 32768 minimum: 301
Dec 13 14:13:47.948109 kernel: LSM: Security Framework initializing
Dec 13 14:13:47.948128 kernel: SELinux: Initializing.
Dec 13 14:13:47.948145 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 14:13:47.948160 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 14:13:47.948176 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 14:13:47.948191 kernel: Platform MSI: ITS@0x10080000 domain created
Dec 13 14:13:47.948207 kernel: PCI/MSI: ITS@0x10080000 domain created
Dec 13 14:13:47.948222 kernel: Remapping and enabling EFI services.
Dec 13 14:13:47.948238 kernel: smp: Bringing up secondary CPUs ...
Dec 13 14:13:47.948253 kernel: Detected PIPT I-cache on CPU1
Dec 13 14:13:47.948269 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Dec 13 14:13:47.948289 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000e0000
Dec 13 14:13:47.948304 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Dec 13 14:13:47.948320 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 14:13:47.948336 kernel: SMP: Total of 2 processors activated.
Dec 13 14:13:47.948351 kernel: CPU features: detected: 32-bit EL0 Support
Dec 13 14:13:47.948367 kernel: CPU features: detected: 32-bit EL1 Support
Dec 13 14:13:47.948382 kernel: CPU features: detected: CRC32 instructions
Dec 13 14:13:47.948398 kernel: CPU: All CPU(s) started at EL1
Dec 13 14:13:47.948413 kernel: alternatives: patching kernel code
Dec 13 14:13:47.948433 kernel: devtmpfs: initialized
Dec 13 14:13:47.948448 kernel: KASLR disabled due to lack of seed
Dec 13 14:13:47.948474 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 14:13:47.948495 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 14:13:47.948511 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 14:13:47.948527 kernel: SMBIOS 3.0.0 present.
Dec 13 14:13:47.948562 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Dec 13 14:13:47.948579 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 14:13:47.948596 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 13 14:13:47.948613 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 13 14:13:47.948629 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 13 14:13:47.948651 kernel: audit: initializing netlink subsys (disabled)
Dec 13 14:13:47.948667 kernel: audit: type=2000 audit(0.249:1): state=initialized audit_enabled=0 res=1
Dec 13 14:13:47.948683 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 14:13:47.948716 kernel: cpuidle: using governor menu
Dec 13 14:13:47.948735 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 13 14:13:47.948756 kernel: ASID allocator initialised with 32768 entries
Dec 13 14:13:47.948773 kernel: ACPI: bus type PCI registered
Dec 13 14:13:47.948790 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 14:13:47.948806 kernel: Serial: AMBA PL011 UART driver
Dec 13 14:13:47.948822 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 14:13:47.948839 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Dec 13 14:13:47.948856 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 14:13:47.948872 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Dec 13 14:13:47.948888 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 14:13:47.948908 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 13 14:13:47.948925 kernel: ACPI: Added _OSI(Module Device)
Dec 13 14:13:47.948941 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 14:13:47.948957 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 14:13:47.948973 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 14:13:47.948989 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 14:13:47.949005 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 14:13:47.949021 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 14:13:47.949038 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 14:13:47.949058 kernel: ACPI: Interpreter enabled
Dec 13 14:13:47.949074 kernel: ACPI: Using GIC for interrupt routing
Dec 13 14:13:47.949090 kernel: ACPI: MCFG table detected, 1 entries
Dec 13 14:13:47.949106 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Dec 13 14:13:47.949379 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 14:13:47.949569 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 13 14:13:47.949780 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 13 14:13:47.949967 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Dec 13 14:13:47.950155 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Dec 13 14:13:47.950178 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Dec 13 14:13:47.950195 kernel: acpiphp: Slot [1] registered
Dec 13 14:13:47.950211 kernel: acpiphp: Slot [2] registered
Dec 13 14:13:47.950228 kernel: acpiphp: Slot [3] registered
Dec 13 14:13:47.950244 kernel: acpiphp: Slot [4] registered
Dec 13 14:13:47.950260 kernel: acpiphp: Slot [5] registered
Dec 13 14:13:47.950276 kernel: acpiphp: Slot [6] registered
Dec 13 14:13:47.950292 kernel: acpiphp: Slot [7] registered
Dec 13 14:13:47.950313 kernel: acpiphp: Slot [8] registered
Dec 13 14:13:47.950329 kernel: acpiphp: Slot [9] registered
Dec 13 14:13:47.950345 kernel: acpiphp: Slot [10] registered
Dec 13 14:13:47.950361 kernel: acpiphp: Slot [11] registered
Dec 13 14:13:47.950378 kernel: acpiphp: Slot [12] registered
Dec 13 14:13:47.950394 kernel: acpiphp: Slot [13] registered
Dec 13 14:13:47.950410 kernel: acpiphp: Slot [14] registered
Dec 13 14:13:47.950426 kernel: acpiphp: Slot [15] registered
Dec 13 14:13:47.950442 kernel: acpiphp: Slot [16] registered
Dec 13 14:13:47.950462 kernel: acpiphp: Slot [17] registered
Dec 13 14:13:47.950478 kernel: acpiphp: Slot [18] registered
Dec 13 14:13:47.950495 kernel: acpiphp: Slot [19] registered
Dec 13 14:13:47.950511 kernel: acpiphp: Slot [20] registered
Dec 13 14:13:47.950527 kernel: acpiphp: Slot [21] registered
Dec 13 14:13:47.950543 kernel: acpiphp: Slot [22] registered
Dec 13 14:13:47.950559 kernel: acpiphp: Slot [23] registered
Dec 13 14:13:47.950575 kernel: acpiphp: Slot [24] registered
Dec 13 14:13:47.950591 kernel: acpiphp: Slot [25] registered
Dec 13 14:13:47.950607 kernel: acpiphp: Slot [26] registered
Dec 13 14:13:47.950627 kernel: acpiphp: Slot [27] registered
Dec 13 14:13:47.950643 kernel: acpiphp: Slot [28] registered
Dec 13 14:13:47.950659 kernel: acpiphp: Slot [29] registered
Dec 13 14:13:47.950675 kernel: acpiphp: Slot [30] registered
Dec 13 14:13:47.950691 kernel: acpiphp: Slot [31] registered
Dec 13 14:13:47.950726 kernel: PCI host bridge to bus 0000:00
Dec 13 14:13:47.950916 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Dec 13 14:13:47.951087 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 13 14:13:47.951259 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Dec 13 14:13:47.951426 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Dec 13 14:13:47.951657 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Dec 13 14:13:47.963964 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Dec 13 14:13:47.964178 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Dec 13 14:13:47.964397 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Dec 13 14:13:47.964621 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Dec 13 14:13:47.964836 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Dec 13 14:13:47.965041 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Dec 13 14:13:47.965231 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Dec 13 14:13:47.965423 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Dec 13 14:13:47.965657 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Dec 13 14:13:47.965916 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Dec 13 14:13:47.966139 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Dec 13 14:13:47.966359 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Dec 13 14:13:47.966582 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Dec 13 14:13:47.967916 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Dec 13 14:13:47.968137 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Dec 13 14:13:47.968323 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Dec 13 14:13:47.968500 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 13 14:13:47.968754 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Dec 13 14:13:47.968780 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 13 14:13:47.968798 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 13 14:13:47.968816 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 13 14:13:47.968832 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 13 14:13:47.968849 kernel: iommu: Default domain type: Translated
Dec 13 14:13:47.968866 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 13 14:13:47.968882 kernel: vgaarb: loaded
Dec 13 14:13:47.968899 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 14:13:47.968921 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 13 14:13:47.968938 kernel: PTP clock support registered
Dec 13 14:13:47.968955 kernel: Registered efivars operations
Dec 13 14:13:47.968971 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 13 14:13:47.968987 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 14:13:47.969004 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 14:13:47.969020 kernel: pnp: PnP ACPI init
Dec 13 14:13:47.969217 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Dec 13 14:13:47.969246 kernel: pnp: PnP ACPI: found 1 devices
Dec 13 14:13:47.969263 kernel: NET: Registered PF_INET protocol family
Dec 13 14:13:47.969280 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 14:13:47.969297 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 14:13:47.969313 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 14:13:47.969330 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 14:13:47.969347 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Dec 13 14:13:47.969363 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 14:13:47.969395 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 14:13:47.969418 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 14:13:47.969435 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 14:13:47.969451 kernel: PCI: CLS 0 bytes, default 64
Dec 13 14:13:47.969468 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Dec 13 14:13:47.969484 kernel: kvm [1]: HYP mode not available
Dec 13 14:13:47.969501 kernel: Initialise system trusted keyrings
Dec 13 14:13:47.969517 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 14:13:47.969534 kernel: Key type asymmetric registered
Dec 13 14:13:47.969550 kernel: Asymmetric key parser 'x509' registered
Dec 13 14:13:47.969570 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 14:13:47.969587 kernel: io scheduler mq-deadline registered
Dec 13 14:13:47.969604 kernel: io scheduler kyber registered
Dec 13 14:13:47.969620 kernel: io scheduler bfq registered
Dec 13 14:13:47.969836 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Dec 13 14:13:47.969863 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Dec 13 14:13:47.969913 kernel: ACPI: button: Power Button [PWRB]
Dec 13 14:13:47.969956 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Dec 13 14:13:47.969980 kernel: ACPI: button: Sleep Button [SLPB]
Dec 13 14:13:47.969997 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 14:13:47.970015 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Dec 13 14:13:47.971791 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Dec 13 14:13:47.971822 kernel: printk: console [ttyS0] disabled
Dec 13 14:13:47.971841 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Dec 13 14:13:47.971858 kernel: printk: console [ttyS0] enabled
Dec 13 14:13:47.971875 kernel: printk: bootconsole [uart0] disabled
Dec 13 14:13:47.971892 kernel: thunder_xcv, ver 1.0
Dec 13 14:13:47.971908 kernel: thunder_bgx, ver 1.0
Dec 13 14:13:47.971933 kernel: nicpf, ver 1.0
Dec 13 14:13:47.971949 kernel: nicvf, ver 1.0
Dec 13 14:13:47.972164 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 13 14:13:47.974217 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T14:13:47 UTC (1734099227)
Dec 13 14:13:47.974245 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 14:13:47.974263 kernel: NET: Registered PF_INET6 protocol family
Dec 13 14:13:47.974280 kernel: Segment Routing with IPv6
Dec 13 14:13:47.974296 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 14:13:47.974320 kernel: NET: Registered PF_PACKET protocol family
Dec 13 14:13:47.974336 kernel: Key type dns_resolver registered
Dec 13 14:13:47.974353 kernel: registered taskstats version 1
Dec 13 14:13:47.974369 kernel: Loading compiled-in X.509 certificates
Dec 13 14:13:47.974386 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: e011ba9949ade5a6d03f7a5e28171f7f59e70f8a'
Dec 13 14:13:47.974402 kernel: Key type .fscrypt registered
Dec 13 14:13:47.974418 kernel: Key type fscrypt-provisioning registered
Dec 13 14:13:47.974434 kernel: ima: No TPM chip found, activating TPM-bypass!
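
[Editor's note] The three "io scheduler ... registered" entries above (mq-deadline, kyber, bfq) only make those schedulers available; the active one is chosen per block device. A sketch of inspecting and switching it through sysfs; nvme0n1 is the NVMe disk that appears later in this log:

    # The bracketed entry is the scheduler currently in use, e.g. "[none] mq-deadline kyber bfq"
    cat /sys/block/nvme0n1/queue/scheduler
    # Switch at runtime, e.g. to mq-deadline
    echo mq-deadline > /sys/block/nvme0n1/queue/scheduler
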
Dec 13 14:13:47.974451 kernel: ima: Allocated hash algorithm: sha1
Dec 13 14:13:47.974471 kernel: ima: No architecture policies found
Dec 13 14:13:47.974487 kernel: clk: Disabling unused clocks
Dec 13 14:13:47.974503 kernel: Freeing unused kernel memory: 36416K
Dec 13 14:13:47.974520 kernel: Run /init as init process
Dec 13 14:13:47.974536 kernel: with arguments:
Dec 13 14:13:47.974552 kernel: /init
Dec 13 14:13:47.974568 kernel: with environment:
Dec 13 14:13:47.974584 kernel: HOME=/
Dec 13 14:13:47.974600 kernel: TERM=linux
Dec 13 14:13:47.974620 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 14:13:47.974641 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 14:13:47.974662 systemd[1]: Detected virtualization amazon.
Dec 13 14:13:47.974680 systemd[1]: Detected architecture arm64.
Dec 13 14:13:47.974714 systemd[1]: Running in initrd.
Dec 13 14:13:47.974735 systemd[1]: No hostname configured, using default hostname.
Dec 13 14:13:47.974753 systemd[1]: Hostname set to <localhost>.
Dec 13 14:13:47.974777 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 14:13:47.974795 systemd[1]: Queued start job for default target initrd.target.
Dec 13 14:13:47.974813 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 14:13:47.974830 systemd[1]: Reached target cryptsetup.target.
Dec 13 14:13:47.974849 systemd[1]: Reached target paths.target.
Dec 13 14:13:47.974866 systemd[1]: Reached target slices.target.
Dec 13 14:13:47.974884 systemd[1]: Reached target swap.target.
Dec 13 14:13:47.974902 systemd[1]: Reached target timers.target.
Dec 13 14:13:47.974924 systemd[1]: Listening on iscsid.socket.
Dec 13 14:13:47.974942 systemd[1]: Listening on iscsiuio.socket.
Dec 13 14:13:47.974960 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 14:13:47.974978 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 14:13:47.974996 systemd[1]: Listening on systemd-journald.socket.
Dec 13 14:13:47.975013 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 14:13:47.975031 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 14:13:47.975049 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 14:13:47.975071 systemd[1]: Reached target sockets.target.
Dec 13 14:13:47.975089 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 14:13:47.975107 systemd[1]: Finished network-cleanup.service.
Dec 13 14:13:47.975125 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 14:13:47.975143 systemd[1]: Starting systemd-journald.service...
Dec 13 14:13:47.975160 systemd[1]: Starting systemd-modules-load.service...
Dec 13 14:13:47.975178 systemd[1]: Starting systemd-resolved.service...
Dec 13 14:13:47.975196 systemd[1]: Starting systemd-vconsole-setup.service...
Dec 13 14:13:47.975214 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 14:13:47.975236 kernel: audit: type=1130 audit(1734099227.943:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:47.975255 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 14:13:47.975273 kernel: audit: type=1130 audit(1734099227.957:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:47.975290 systemd[1]: Finished systemd-vconsole-setup.service.
Dec 13 14:13:47.975308 kernel: audit: type=1130 audit(1734099227.969:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:47.975328 systemd-journald[309]: Journal started
Dec 13 14:13:47.975415 systemd-journald[309]: Runtime Journal (/run/log/journal/ec2d8f980b5d9c7852e63b7c8f0ccec3) is 8.0M, max 75.4M, 67.4M free.
Dec 13 14:13:47.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:47.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:47.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:47.943224 systemd-modules-load[310]: Inserted module 'overlay'
Dec 13 14:13:47.989823 systemd[1]: Starting dracut-cmdline-ask.service...
Dec 13 14:13:48.003207 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 14:13:48.003277 systemd[1]: Started systemd-journald.service.
Dec 13 14:13:48.003304 kernel: audit: type=1130 audit(1734099228.001:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:48.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:48.031810 systemd-resolved[311]: Positive Trust Anchors:
Dec 13 14:13:48.031837 systemd-resolved[311]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 14:13:48.031891 systemd-resolved[311]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 14:13:48.049303 systemd[1]: Finished dracut-cmdline-ask.service.
Dec 13 14:13:48.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:48.064732 kernel: audit: type=1130 audit(1734099228.049:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:48.064794 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 14:13:48.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:48.065657 systemd[1]: Starting dracut-cmdline.service...
Dec 13 14:13:48.068332 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 14:13:48.083927 systemd-modules-load[310]: Inserted module 'br_netfilter'
Dec 13 14:13:48.085846 kernel: audit: type=1130 audit(1734099228.068:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:48.085884 kernel: Bridge firewalling registered
Dec 13 14:13:48.110229 dracut-cmdline[325]: dracut-dracut-053
Dec 13 14:13:48.111908 kernel: SCSI subsystem initialized
Dec 13 14:13:48.115552 dracut-cmdline[325]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5997a8cf94b1df1856dc785f0a7074604bbf4c21fdcca24a1996021471a77601
Dec 13 14:13:48.147556 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 14:13:48.147624 kernel: device-mapper: uevent: version 1.0.3
Dec 13 14:13:48.151725 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Dec 13 14:13:48.156243 systemd-modules-load[310]: Inserted module 'dm_multipath'
Dec 13 14:13:48.160044 systemd[1]: Finished systemd-modules-load.service.
Dec 13 14:13:48.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:48.170516 systemd[1]: Starting systemd-sysctl.service...
Dec 13 14:13:48.176496 kernel: audit: type=1130 audit(1734099228.160:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:48.194104 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:13:48.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:48.207731 kernel: audit: type=1130 audit(1734099228.194:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:48.278733 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 14:13:48.298737 kernel: iscsi: registered transport (tcp)
Dec 13 14:13:48.325514 kernel: iscsi: registered transport (qla4xxx)
Dec 13 14:13:48.325596 kernel: QLogic iSCSI HBA Driver
Dec 13 14:13:48.496504 systemd-resolved[311]: Defaulting to hostname 'linux'.
Dec 13 14:13:48.498235 kernel: random: crng init done
Dec 13 14:13:48.498809 systemd[1]: Started systemd-resolved.service.
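
[Editor's note] The bridge warning above means bridged traffic is no longer passed through iptables/arptables/ip6tables unless br_netfilter is loaded; in this boot, systemd-modules-load inserts it a moment later ("Bridge firewalling registered"). A sketch of doing the same by hand and persisting it across boots:

    # Load the module once
    modprobe br_netfilter
    # Have systemd-modules-load insert it on every boot
    echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
    # With the module loaded, this sysctl controls iptables filtering of bridged traffic
    sysctl net.bridge.bridge-nf-call-iptables=1
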
Dec 13 14:13:48.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:48.500029 systemd[1]: Reached target nss-lookup.target.
Dec 13 14:13:48.511214 kernel: audit: type=1130 audit(1734099228.497:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:48.525761 systemd[1]: Finished dracut-cmdline.service.
Dec 13 14:13:48.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:48.529469 systemd[1]: Starting dracut-pre-udev.service...
Dec 13 14:13:48.594754 kernel: raid6: neonx8 gen() 6392 MB/s
Dec 13 14:13:48.612728 kernel: raid6: neonx8 xor() 4790 MB/s
Dec 13 14:13:48.630725 kernel: raid6: neonx4 gen() 6480 MB/s
Dec 13 14:13:48.648725 kernel: raid6: neonx4 xor() 5006 MB/s
Dec 13 14:13:48.666725 kernel: raid6: neonx2 gen() 5764 MB/s
Dec 13 14:13:48.684725 kernel: raid6: neonx2 xor() 4586 MB/s
Dec 13 14:13:48.702725 kernel: raid6: neonx1 gen() 4439 MB/s
Dec 13 14:13:48.720725 kernel: raid6: neonx1 xor() 3699 MB/s
Dec 13 14:13:48.738725 kernel: raid6: int64x8 gen() 3421 MB/s
Dec 13 14:13:48.756725 kernel: raid6: int64x8 xor() 2095 MB/s
Dec 13 14:13:48.774725 kernel: raid6: int64x4 gen() 3792 MB/s
Dec 13 14:13:48.792725 kernel: raid6: int64x4 xor() 2199 MB/s
Dec 13 14:13:48.810726 kernel: raid6: int64x2 gen() 3591 MB/s
Dec 13 14:13:48.828725 kernel: raid6: int64x2 xor() 1950 MB/s
Dec 13 14:13:48.846726 kernel: raid6: int64x1 gen() 2762 MB/s
Dec 13 14:13:48.865878 kernel: raid6: int64x1 xor() 1450 MB/s
Dec 13 14:13:48.865907 kernel: raid6: using algorithm neonx4 gen() 6480 MB/s
Dec 13 14:13:48.865931 kernel: raid6: .... xor() 5006 MB/s, rmw enabled
Dec 13 14:13:48.867480 kernel: raid6: using neon recovery algorithm
Dec 13 14:13:48.887030 kernel: xor: measuring software checksum speed
Dec 13 14:13:48.887092 kernel: 8regs : 9111 MB/sec
Dec 13 14:13:48.888762 kernel: 32regs : 11105 MB/sec
Dec 13 14:13:48.890547 kernel: arm64_neon : 9184 MB/sec
Dec 13 14:13:48.890577 kernel: xor: using function: 32regs (11105 MB/sec)
Dec 13 14:13:48.981741 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Dec 13 14:13:48.999286 systemd[1]: Finished dracut-pre-udev.service.
Dec 13 14:13:49.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:49.001000 audit: BPF prog-id=7 op=LOAD
Dec 13 14:13:49.001000 audit: BPF prog-id=8 op=LOAD
Dec 13 14:13:49.003655 systemd[1]: Starting systemd-udevd.service...
Dec 13 14:13:49.031955 systemd-udevd[508]: Using default interface naming scheme 'v252'.
Dec 13 14:13:49.042730 systemd[1]: Started systemd-udevd.service.
Dec 13 14:13:49.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:49.052858 systemd[1]: Starting dracut-pre-trigger.service...
Dec 13 14:13:49.079888 dracut-pre-trigger[521]: rd.md=0: removing MD RAID activation
Dec 13 14:13:49.138911 systemd[1]: Finished dracut-pre-trigger.service.
Dec 13 14:13:49.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:49.143309 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 14:13:49.243081 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 14:13:49.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:49.353723 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Dec 13 14:13:49.353788 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Dec 13 14:13:49.378886 kernel: ena 0000:00:05.0: ENA device version: 0.10
Dec 13 14:13:49.379117 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Dec 13 14:13:49.379327 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Dec 13 14:13:49.379353 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:1e:70:d1:0e:c5
Dec 13 14:13:49.376288 (udev-worker)[570]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 14:13:49.392734 kernel: nvme nvme0: pci function 0000:00:04.0
Dec 13 14:13:49.400738 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Dec 13 14:13:49.408733 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 14:13:49.408771 kernel: GPT:9289727 != 16777215
Dec 13 14:13:49.408795 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 14:13:49.410688 kernel: GPT:9289727 != 16777215
Dec 13 14:13:49.411912 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 14:13:49.413605 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 14:13:49.494744 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (559)
Dec 13 14:13:49.525996 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Dec 13 14:13:49.530080 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Dec 13 14:13:49.573742 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Dec 13 14:13:49.579084 systemd[1]: Starting disk-uuid.service...
Dec 13 14:13:49.592950 disk-uuid[664]: Primary Header is updated.
Dec 13 14:13:49.592950 disk-uuid[664]: Secondary Entries is updated.
Dec 13 14:13:49.592950 disk-uuid[664]: Secondary Header is updated.
Dec 13 14:13:49.618017 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Dec 13 14:13:49.643244 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 14:13:50.613734 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 14:13:50.614048 disk-uuid[670]: The operation has completed successfully.
Dec 13 14:13:50.780627 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 14:13:50.781222 systemd[1]: Finished disk-uuid.service.
Dec 13 14:13:50.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:50.784000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:50.797187 systemd[1]: Starting verity-setup.service...
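
[Editor's note] The "GPT:9289727 != 16777215" warnings above are typical of a disk image written to a larger volume: the backup GPT header still sits where the image ended rather than at the end of the disk. Here disk-uuid.service repairs it on first boot (the "Primary Header is updated." lines are sgdisk output). A manual equivalent, assuming the same /dev/nvme0n1 device:

    # Move the backup GPT data structures to the true end of the disk
    sgdisk --move-second-header /dev/nvme0n1
    # Interactive parted also offers to "Fix" the mismatch when printing the table
    parted /dev/nvme0n1 print
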
Dec 13 14:13:50.831730 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Dec 13 14:13:50.933083 systemd[1]: Found device dev-mapper-usr.device.
Dec 13 14:13:50.938970 systemd[1]: Mounting sysusr-usr.mount...
Dec 13 14:13:50.944042 systemd[1]: Finished verity-setup.service.
Dec 13 14:13:50.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:51.037734 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Dec 13 14:13:51.039272 systemd[1]: Mounted sysusr-usr.mount.
Dec 13 14:13:51.040320 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Dec 13 14:13:51.044386 systemd[1]: Starting ignition-setup.service...
Dec 13 14:13:51.048575 systemd[1]: Starting parse-ip-for-networkd.service...
Dec 13 14:13:51.082063 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 14:13:51.082132 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 14:13:51.082156 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Dec 13 14:13:51.096744 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 14:13:51.114773 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 14:13:51.132407 systemd[1]: Finished ignition-setup.service.
Dec 13 14:13:51.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:51.137231 systemd[1]: Starting ignition-fetch-offline.service...
Dec 13 14:13:51.195488 systemd[1]: Finished parse-ip-for-networkd.service.
Dec 13 14:13:51.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:51.198000 audit: BPF prog-id=9 op=LOAD
Dec 13 14:13:51.201641 systemd[1]: Starting systemd-networkd.service...
Dec 13 14:13:51.252656 systemd-networkd[1099]: lo: Link UP
Dec 13 14:13:51.252681 systemd-networkd[1099]: lo: Gained carrier
Dec 13 14:13:51.255204 systemd-networkd[1099]: Enumeration completed
Dec 13 14:13:51.257486 systemd-networkd[1099]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 14:13:51.261028 systemd[1]: Started systemd-networkd.service.
Dec 13 14:13:51.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:51.265549 systemd-networkd[1099]: eth0: Link UP
Dec 13 14:13:51.265552 systemd[1]: Reached target network.target.
Dec 13 14:13:51.265570 systemd-networkd[1099]: eth0: Gained carrier
Dec 13 14:13:51.285085 systemd[1]: Starting iscsiuio.service...
Dec 13 14:13:51.294877 systemd-networkd[1099]: eth0: DHCPv4 address 172.31.27.214/20, gateway 172.31.16.1 acquired from 172.31.16.1
Dec 13 14:13:51.299428 systemd[1]: Started iscsiuio.service.
Dec 13 14:13:51.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:51.303875 systemd[1]: Starting iscsid.service...
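
[Editor's note] Above, eth0 is matched by the stock /usr/lib/systemd/network/zz-default.network and then acquires 172.31.27.214/20 over DHCP. A minimal .network file of the same shape; this is illustrative, not the verbatim contents of zz-default.network:

    # Dropped into /etc/systemd/network/, a file like this overrides the stock match for eth0
    [Match]
    Name=eth0

    [Network]
    DHCP=yes
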
Dec 13 14:13:51.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:51.314813 systemd[1]: Started iscsid.service.
Dec 13 14:13:51.319765 iscsid[1104]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 14:13:51.319765 iscsid[1104]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
Dec 13 14:13:51.319765 iscsid[1104]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Dec 13 14:13:51.319765 iscsid[1104]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Dec 13 14:13:51.319765 iscsid[1104]: If using hardware iscsi like qla4xxx this message can be ignored.
Dec 13 14:13:51.319765 iscsid[1104]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 14:13:51.319765 iscsid[1104]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Dec 13 14:13:51.319034 systemd[1]: Starting dracut-initqueue.service...
Dec 13 14:13:51.363774 systemd[1]: Finished dracut-initqueue.service.
Dec 13 14:13:51.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:51.366202 systemd[1]: Reached target remote-fs-pre.target.
Dec 13 14:13:51.368405 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 14:13:51.371302 systemd[1]: Reached target remote-fs.target.
Dec 13 14:13:51.375919 systemd[1]: Starting dracut-pre-mount.service...
Dec 13 14:13:51.395225 systemd[1]: Finished dracut-pre-mount.service.
Dec 13 14:13:51.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:51.671127 ignition[1047]: Ignition 2.14.0
Dec 13 14:13:51.671159 ignition[1047]: Stage: fetch-offline
Dec 13 14:13:51.671554 ignition[1047]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:13:51.671618 ignition[1047]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:13:51.693740 ignition[1047]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:13:51.694623 ignition[1047]: Ignition finished successfully
Dec 13 14:13:51.699127 systemd[1]: Finished ignition-fetch-offline.service.
Dec 13 14:13:51.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:51.702102 systemd[1]: Starting ignition-fetch.service...
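
[Editor's note] The iscsid warnings above are harmless on a machine that never logs into iSCSI targets; they only say that no initiator name has been configured. A sketch of the fix the message asks for, using open-iscsi's iscsi-iname generator:

    # iscsi-iname prints a unique iqn-format initiator name
    echo "InitiatorName=$(iscsi-iname)" > /etc/iscsi/initiatorname.iscsi
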
Dec 13 14:13:51.717451 ignition[1123]: Ignition 2.14.0
Dec 13 14:13:51.719041 ignition[1123]: Stage: fetch
Dec 13 14:13:51.720379 ignition[1123]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:13:51.720454 ignition[1123]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:13:51.733023 ignition[1123]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:13:51.736232 ignition[1123]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:13:51.764194 ignition[1123]: INFO : PUT result: OK
Dec 13 14:13:51.769211 ignition[1123]: DEBUG : parsed url from cmdline: ""
Dec 13 14:13:51.769211 ignition[1123]: INFO : no config URL provided
Dec 13 14:13:51.769211 ignition[1123]: INFO : reading system config file "/usr/lib/ignition/user.ign"
Dec 13 14:13:51.774669 ignition[1123]: INFO : no config at "/usr/lib/ignition/user.ign"
Dec 13 14:13:51.774669 ignition[1123]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:13:51.779360 ignition[1123]: INFO : PUT result: OK
Dec 13 14:13:51.780890 ignition[1123]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Dec 13 14:13:51.785718 ignition[1123]: INFO : GET result: OK
Dec 13 14:13:51.785718 ignition[1123]: DEBUG : parsing config with SHA512: 12ab8b1386b2875d4837ed77bdf4770a8b45d8bdf401f9308db0829d1249d95277c59ad8307c130cee722a2e79d658b4e9c4a5dc562d5266f93dcc210eed5a3a
Dec 13 14:13:51.797431 unknown[1123]: fetched base config from "system"
Dec 13 14:13:51.798586 unknown[1123]: fetched base config from "system"
Dec 13 14:13:51.799466 unknown[1123]: fetched user config from "aws"
Dec 13 14:13:51.803063 ignition[1123]: fetch: fetch complete
Dec 13 14:13:51.803207 ignition[1123]: fetch: fetch passed
Dec 13 14:13:51.803294 ignition[1123]: Ignition finished successfully
Dec 13 14:13:51.809965 systemd[1]: Finished ignition-fetch.service.
Dec 13 14:13:51.815887 kernel: kauditd_printk_skb: 19 callbacks suppressed
Dec 13 14:13:51.815948 kernel: audit: type=1130 audit(1734099231.809:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:51.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:51.814451 systemd[1]: Starting ignition-kargs.service...
Dec 13 14:13:51.838948 ignition[1129]: Ignition 2.14.0
Dec 13 14:13:51.838977 ignition[1129]: Stage: kargs
Dec 13 14:13:51.839275 ignition[1129]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:13:51.839332 ignition[1129]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:13:51.853880 ignition[1129]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:13:51.856517 ignition[1129]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:13:51.859124 ignition[1129]: INFO : PUT result: OK
Dec 13 14:13:51.864961 ignition[1129]: kargs: kargs passed
Dec 13 14:13:51.865237 ignition[1129]: Ignition finished successfully
Dec 13 14:13:51.869303 systemd[1]: Finished ignition-kargs.service.
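
[Editor's note] The PUT-then-GET pairs above are Ignition speaking IMDSv2: it first PUTs for a session token, then presents that token on the metadata GET. The same exchange by hand with curl, using the documented IMDSv2 headers:

    # Request a session token valid for six hours
    TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
        -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
    # Fetch the user data Ignition reads, presenting the token
    curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
        "http://169.254.169.254/2019-10-01/user-data"
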
Dec 13 14:13:51.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:51.874378 systemd[1]: Starting ignition-disks.service...
Dec 13 14:13:51.882180 kernel: audit: type=1130 audit(1734099231.868:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:51.890238 ignition[1135]: Ignition 2.14.0
Dec 13 14:13:51.890264 ignition[1135]: Stage: disks
Dec 13 14:13:51.890566 ignition[1135]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:13:51.890619 ignition[1135]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:13:51.904072 ignition[1135]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:13:51.906623 ignition[1135]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:13:51.909189 ignition[1135]: INFO : PUT result: OK
Dec 13 14:13:51.914220 ignition[1135]: disks: disks passed
Dec 13 14:13:51.914322 ignition[1135]: Ignition finished successfully
Dec 13 14:13:51.919257 systemd[1]: Finished ignition-disks.service.
Dec 13 14:13:51.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:51.922195 systemd[1]: Reached target initrd-root-device.target.
Dec 13 14:13:51.950137 kernel: audit: type=1130 audit(1734099231.920:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:51.930128 systemd[1]: Reached target local-fs-pre.target.
Dec 13 14:13:51.931705 systemd[1]: Reached target local-fs.target.
Dec 13 14:13:51.933244 systemd[1]: Reached target sysinit.target.
Dec 13 14:13:51.934646 systemd[1]: Reached target basic.target.
Dec 13 14:13:51.937428 systemd[1]: Starting systemd-fsck-root.service...
Dec 13 14:13:51.983317 systemd-fsck[1143]: ROOT: clean, 621/553520 files, 56020/553472 blocks
Dec 13 14:13:51.989144 systemd[1]: Finished systemd-fsck-root.service.
Dec 13 14:13:51.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:51.992887 systemd[1]: Mounting sysroot.mount...
Dec 13 14:13:52.000552 kernel: audit: type=1130 audit(1734099231.989:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:52.022742 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Dec 13 14:13:52.024620 systemd[1]: Mounted sysroot.mount.
Dec 13 14:13:52.027560 systemd[1]: Reached target initrd-root-fs.target.
Dec 13 14:13:52.038685 systemd[1]: Mounting sysroot-usr.mount...
Dec 13 14:13:52.044815 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Dec 13 14:13:52.046227 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 14:13:52.046293 systemd[1]: Reached target ignition-diskful.target.
Dec 13 14:13:52.059920 systemd[1]: Mounted sysroot-usr.mount.
Dec 13 14:13:52.066536 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Dec 13 14:13:52.071303 systemd[1]: Starting initrd-setup-root.service...
Dec 13 14:13:52.089258 initrd-setup-root[1165]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 14:13:52.099744 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1160)
Dec 13 14:13:52.102675 initrd-setup-root[1173]: cut: /sysroot/etc/group: No such file or directory
Dec 13 14:13:52.109542 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 14:13:52.109580 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 14:13:52.109603 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Dec 13 14:13:52.114500 initrd-setup-root[1197]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 14:13:52.124079 initrd-setup-root[1205]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 14:13:52.144723 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 14:13:52.156305 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Dec 13 14:13:52.257154 systemd[1]: Finished initrd-setup-root.service.
Dec 13 14:13:52.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:52.261459 systemd[1]: Starting ignition-mount.service...
Dec 13 14:13:52.275153 kernel: audit: type=1130 audit(1734099232.259:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:52.271724 systemd[1]: Starting sysroot-boot.service...
Dec 13 14:13:52.282775 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Dec 13 14:13:52.282943 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Dec 13 14:13:52.306039 ignition[1225]: INFO : Ignition 2.14.0
Dec 13 14:13:52.306039 ignition[1225]: INFO : Stage: mount
Dec 13 14:13:52.309886 ignition[1225]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:13:52.309886 ignition[1225]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:13:52.328997 ignition[1225]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:13:52.334043 ignition[1225]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:13:52.339530 ignition[1225]: INFO : PUT result: OK
Dec 13 14:13:52.345717 ignition[1225]: INFO : mount: mount passed
Dec 13 14:13:52.347224 ignition[1225]: INFO : Ignition finished successfully
Dec 13 14:13:52.350943 systemd[1]: Finished sysroot-boot.service.
Dec 13 14:13:52.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:52.354163 systemd[1]: Finished ignition-mount.service.
Dec 13 14:13:52.362769 kernel: audit: type=1130 audit(1734099232.352:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:52.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:52.364128 systemd[1]: Starting ignition-files.service... Dec 13 14:13:52.373777 kernel: audit: type=1130 audit(1734099232.361:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:52.379095 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 14:13:52.403823 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1235) Dec 13 14:13:52.408874 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Dec 13 14:13:52.408916 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 14:13:52.408942 kernel: BTRFS info (device nvme0n1p6): has skinny extents Dec 13 14:13:52.424728 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 14:13:52.429886 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 14:13:52.448412 ignition[1254]: INFO : Ignition 2.14.0 Dec 13 14:13:52.448412 ignition[1254]: INFO : Stage: files Dec 13 14:13:52.452196 ignition[1254]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:13:52.452196 ignition[1254]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 14:13:52.465815 ignition[1254]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 14:13:52.468166 ignition[1254]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 14:13:52.471043 ignition[1254]: INFO : PUT result: OK Dec 13 14:13:52.477572 ignition[1254]: DEBUG : files: compiled without relabeling support, skipping Dec 13 14:13:52.482677 ignition[1254]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 14:13:52.485243 ignition[1254]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 14:13:52.494142 ignition[1254]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 14:13:52.497002 ignition[1254]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 14:13:52.501202 unknown[1254]: wrote ssh authorized keys file for user: core Dec 13 14:13:52.503297 ignition[1254]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 14:13:52.513954 ignition[1254]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 14:13:52.517178 ignition[1254]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 14:13:52.517178 ignition[1254]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 14:13:52.517178 ignition[1254]: INFO : GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Dec 13 14:13:52.644752 ignition[1254]: INFO : GET result: OK Dec 13 14:13:52.809119 ignition[1254]: INFO : 
files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 14:13:52.813239 ignition[1254]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 14:13:52.813239 ignition[1254]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 14:13:52.813239 ignition[1254]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 14:13:52.813239 ignition[1254]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 14:13:52.813239 ignition[1254]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/etc/eks/bootstrap.sh" Dec 13 14:13:52.813239 ignition[1254]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:13:52.843647 ignition[1254]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem349998578" Dec 13 14:13:52.849825 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1259) Dec 13 14:13:52.849871 ignition[1254]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem349998578": device or resource busy Dec 13 14:13:52.849871 ignition[1254]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem349998578", trying btrfs: device or resource busy Dec 13 14:13:52.849871 ignition[1254]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem349998578" Dec 13 14:13:52.849871 ignition[1254]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem349998578" Dec 13 14:13:52.863289 ignition[1254]: INFO : op(3): [started] unmounting "/mnt/oem349998578" Dec 13 14:13:52.865867 ignition[1254]: INFO : op(3): [finished] unmounting "/mnt/oem349998578" Dec 13 14:13:52.865867 ignition[1254]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/etc/eks/bootstrap.sh" Dec 13 14:13:52.865867 ignition[1254]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:13:52.865867 ignition[1254]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:13:52.865867 ignition[1254]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 14:13:52.865867 ignition[1254]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 14:13:52.865867 ignition[1254]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 14:13:52.865867 ignition[1254]: INFO : GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Dec 13 14:13:53.027858 systemd-networkd[1099]: eth0: Gained IPv6LL Dec 13 14:13:53.198622 ignition[1254]: INFO : GET result: OK Dec 13 14:13:53.382526 ignition[1254]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 14:13:53.386227 ignition[1254]: INFO : files: 
createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh" Dec 13 14:13:53.386227 ignition[1254]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 14:13:53.386227 ignition[1254]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 14:13:53.386227 ignition[1254]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 14:13:53.386227 ignition[1254]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Dec 13 14:13:53.386227 ignition[1254]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:13:53.412751 ignition[1254]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4062058038" Dec 13 14:13:53.412751 ignition[1254]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4062058038": device or resource busy Dec 13 14:13:53.412751 ignition[1254]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem4062058038", trying btrfs: device or resource busy Dec 13 14:13:53.412751 ignition[1254]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4062058038" Dec 13 14:13:53.426040 ignition[1254]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4062058038" Dec 13 14:13:53.426040 ignition[1254]: INFO : op(6): [started] unmounting "/mnt/oem4062058038" Dec 13 14:13:53.426040 ignition[1254]: INFO : op(6): [finished] unmounting "/mnt/oem4062058038" Dec 13 14:13:53.426040 ignition[1254]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Dec 13 14:13:53.426040 ignition[1254]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 14:13:53.426040 ignition[1254]: INFO : GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Dec 13 14:13:53.819908 ignition[1254]: INFO : GET result: OK Dec 13 14:13:54.336296 ignition[1254]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 14:13:54.344200 ignition[1254]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Dec 13 14:13:54.344200 ignition[1254]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:13:54.353225 ignition[1254]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem825059533" Dec 13 14:13:54.355883 ignition[1254]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem825059533": device or resource busy Dec 13 14:13:54.355883 ignition[1254]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem825059533", trying btrfs: device or resource busy Dec 13 14:13:54.355883 ignition[1254]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem825059533" Dec 13 14:13:54.367394 ignition[1254]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem825059533" Dec 13 14:13:54.367394 ignition[1254]: INFO : op(9): [started] unmounting "/mnt/oem825059533" Dec 13 14:13:54.367394 ignition[1254]: INFO : 
op(9): [finished] unmounting "/mnt/oem825059533" Dec 13 14:13:54.367394 ignition[1254]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Dec 13 14:13:54.367394 ignition[1254]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Dec 13 14:13:54.367394 ignition[1254]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:13:54.388187 systemd[1]: mnt-oem825059533.mount: Deactivated successfully. Dec 13 14:13:54.411887 ignition[1254]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem624558180" Dec 13 14:13:54.414602 ignition[1254]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem624558180": device or resource busy Dec 13 14:13:54.414602 ignition[1254]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem624558180", trying btrfs: device or resource busy Dec 13 14:13:54.414602 ignition[1254]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem624558180" Dec 13 14:13:54.423436 ignition[1254]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem624558180" Dec 13 14:13:54.423436 ignition[1254]: INFO : op(c): [started] unmounting "/mnt/oem624558180" Dec 13 14:13:54.423436 ignition[1254]: INFO : op(c): [finished] unmounting "/mnt/oem624558180" Dec 13 14:13:54.423436 ignition[1254]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Dec 13 14:13:54.423436 ignition[1254]: INFO : files: op(11): [started] processing unit "nvidia.service" Dec 13 14:13:54.423436 ignition[1254]: INFO : files: op(11): [finished] processing unit "nvidia.service" Dec 13 14:13:54.423436 ignition[1254]: INFO : files: op(12): [started] processing unit "coreos-metadata-sshkeys@.service" Dec 13 14:13:54.423436 ignition[1254]: INFO : files: op(12): [finished] processing unit "coreos-metadata-sshkeys@.service" Dec 13 14:13:54.423436 ignition[1254]: INFO : files: op(13): [started] processing unit "amazon-ssm-agent.service" Dec 13 14:13:54.423436 ignition[1254]: INFO : files: op(13): op(14): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Dec 13 14:13:54.423436 ignition[1254]: INFO : files: op(13): op(14): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Dec 13 14:13:54.423436 ignition[1254]: INFO : files: op(13): [finished] processing unit "amazon-ssm-agent.service" Dec 13 14:13:54.423436 ignition[1254]: INFO : files: op(15): [started] processing unit "containerd.service" Dec 13 14:13:54.423436 ignition[1254]: INFO : files: op(15): op(16): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 14:13:54.423436 ignition[1254]: INFO : files: op(15): op(16): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 14:13:54.423436 ignition[1254]: INFO : files: op(15): [finished] processing unit "containerd.service" Dec 13 14:13:54.423436 ignition[1254]: INFO : files: op(17): [started] processing unit "prepare-helm.service" Dec 13 14:13:54.423436 ignition[1254]: INFO : files: op(17): op(18): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 14:13:54.423436 
ignition[1254]: INFO : files: op(17): op(18): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 14:13:54.475761 ignition[1254]: INFO : files: op(17): [finished] processing unit "prepare-helm.service" Dec 13 14:13:54.475761 ignition[1254]: INFO : files: op(19): [started] setting preset to enabled for "nvidia.service" Dec 13 14:13:54.475761 ignition[1254]: INFO : files: op(19): [finished] setting preset to enabled for "nvidia.service" Dec 13 14:13:54.475761 ignition[1254]: INFO : files: op(1a): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 14:13:54.475761 ignition[1254]: INFO : files: op(1a): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 14:13:54.475761 ignition[1254]: INFO : files: op(1b): [started] setting preset to enabled for "amazon-ssm-agent.service" Dec 13 14:13:54.475761 ignition[1254]: INFO : files: op(1b): [finished] setting preset to enabled for "amazon-ssm-agent.service" Dec 13 14:13:54.475761 ignition[1254]: INFO : files: op(1c): [started] setting preset to enabled for "prepare-helm.service" Dec 13 14:13:54.475761 ignition[1254]: INFO : files: op(1c): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 14:13:54.505597 ignition[1254]: INFO : files: createResultFile: createFiles: op(1d): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:13:54.509785 ignition[1254]: INFO : files: createResultFile: createFiles: op(1d): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:13:54.509785 ignition[1254]: INFO : files: files passed Dec 13 14:13:54.509785 ignition[1254]: INFO : Ignition finished successfully Dec 13 14:13:54.516550 systemd[1]: Finished ignition-files.service. Dec 13 14:13:54.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:54.529728 kernel: audit: type=1130 audit(1734099234.518:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:54.533123 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 14:13:54.537016 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 14:13:54.540939 systemd[1]: Starting ignition-quench.service... Dec 13 14:13:54.547376 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 14:13:54.549504 systemd[1]: Finished ignition-quench.service. Dec 13 14:13:54.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:54.551000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:54.567452 kernel: audit: type=1130 audit(1734099234.551:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:13:54.567496 kernel: audit: type=1131 audit(1734099234.551:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:54.573380 initrd-setup-root-after-ignition[1279]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 14:13:54.577492 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 14:13:54.581416 systemd[1]: Reached target ignition-complete.target. Dec 13 14:13:54.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:54.585881 systemd[1]: Starting initrd-parse-etc.service... Dec 13 14:13:54.614494 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 14:13:54.616578 systemd[1]: Finished initrd-parse-etc.service. Dec 13 14:13:54.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:54.618000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:54.620039 systemd[1]: Reached target initrd-fs.target. Dec 13 14:13:54.623015 systemd[1]: Reached target initrd.target. Dec 13 14:13:54.625811 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 14:13:54.629483 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 14:13:54.652569 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 14:13:54.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:54.655761 systemd[1]: Starting initrd-cleanup.service... Dec 13 14:13:54.680078 systemd[1]: Stopped target nss-lookup.target. Dec 13 14:13:54.683318 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 14:13:54.686755 systemd[1]: Stopped target timers.target. Dec 13 14:13:54.689587 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 14:13:54.691546 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 14:13:54.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:54.694613 systemd[1]: Stopped target initrd.target. Dec 13 14:13:54.697995 systemd[1]: Stopped target basic.target. Dec 13 14:13:54.703305 systemd[1]: Stopped target ignition-complete.target. Dec 13 14:13:54.706546 systemd[1]: Stopped target ignition-diskful.target. Dec 13 14:13:54.709753 systemd[1]: Stopped target initrd-root-device.target. Dec 13 14:13:54.714555 systemd[1]: Stopped target remote-fs.target. Dec 13 14:13:54.718013 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 14:13:54.721109 systemd[1]: Stopped target sysinit.target. Dec 13 14:13:54.726334 systemd[1]: Stopped target local-fs.target. Dec 13 14:13:54.729171 systemd[1]: Stopped target local-fs-pre.target. Dec 13 14:13:54.734383 systemd[1]: Stopped target swap.target. 
Dec 13 14:13:54.737078 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 14:13:54.738886 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 14:13:54.740000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:54.741979 systemd[1]: Stopped target cryptsetup.target. Dec 13 14:13:54.744837 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 14:13:54.746740 systemd[1]: Stopped dracut-initqueue.service. Dec 13 14:13:54.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:54.749860 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 14:13:54.752122 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 14:13:54.753000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:54.755792 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 14:13:54.757675 systemd[1]: Stopped ignition-files.service. Dec 13 14:13:54.758000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:54.765797 iscsid[1104]: iscsid shutting down. Dec 13 14:13:54.762035 systemd[1]: Stopping ignition-mount.service... Dec 13 14:13:54.764344 systemd[1]: Stopping iscsid.service... Dec 13 14:13:54.778268 systemd[1]: Stopping sysroot-boot.service... Dec 13 14:13:54.779660 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 14:13:54.784277 systemd[1]: Stopped systemd-udev-trigger.service. Dec 13 14:13:54.802976 ignition[1292]: INFO : Ignition 2.14.0 Dec 13 14:13:54.802976 ignition[1292]: INFO : Stage: umount Dec 13 14:13:54.802976 ignition[1292]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:13:54.802976 ignition[1292]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 14:13:54.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:54.806942 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 14:13:54.824855 systemd[1]: Stopped dracut-pre-trigger.service. Dec 13 14:13:54.838425 ignition[1292]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 14:13:54.838425 ignition[1292]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 14:13:54.842950 ignition[1292]: INFO : PUT result: OK Dec 13 14:13:54.843000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:54.851073 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Dec 13 14:13:54.853494 ignition[1292]: INFO : umount: umount passed Dec 13 14:13:54.855154 ignition[1292]: INFO : Ignition finished successfully Dec 13 14:13:54.858103 systemd[1]: iscsid.service: Deactivated successfully. Dec 13 14:13:54.859801 systemd[1]: Stopped iscsid.service. Dec 13 14:13:54.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:54.863610 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 14:13:54.865452 systemd[1]: Stopped ignition-mount.service. Dec 13 14:13:54.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:54.870665 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 14:13:54.870867 systemd[1]: Finished initrd-cleanup.service. Dec 13 14:13:54.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:54.874000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:54.877661 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 14:13:54.877795 systemd[1]: Stopped ignition-disks.service. Dec 13 14:13:54.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:54.882541 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 14:13:54.882645 systemd[1]: Stopped ignition-kargs.service. Dec 13 14:13:54.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:54.899364 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 14:13:54.899000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:54.899447 systemd[1]: Stopped ignition-fetch.service. Dec 13 14:13:54.902000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:54.901087 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 14:13:54.902748 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 14:13:54.904526 systemd[1]: Stopped target paths.target. Dec 13 14:13:54.914087 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 14:13:54.917773 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 14:13:54.921012 systemd[1]: Stopped target slices.target. Dec 13 14:13:54.933955 systemd[1]: Stopped target sockets.target. Dec 13 14:13:54.938786 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 14:13:54.938886 systemd[1]: Closed iscsid.socket. Dec 13 14:13:54.956441 systemd[1]: ignition-setup.service: Deactivated successfully. 
Dec 13 14:13:54.956558 systemd[1]: Stopped ignition-setup.service. Dec 13 14:13:54.957000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:54.972732 systemd[1]: Stopping iscsiuio.service... Dec 13 14:13:54.979397 systemd[1]: iscsiuio.service: Deactivated successfully. Dec 13 14:13:54.980266 systemd[1]: Stopped iscsiuio.service. Dec 13 14:13:54.985000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:54.987075 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 14:13:54.987897 systemd[1]: Stopped sysroot-boot.service. Dec 13 14:13:54.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:54.991241 systemd[1]: Stopped target network.target. Dec 13 14:13:54.992941 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 14:13:54.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:54.993009 systemd[1]: Closed iscsiuio.socket. Dec 13 14:13:54.994368 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 14:13:54.994611 systemd[1]: Stopped initrd-setup-root.service. Dec 13 14:13:54.996108 systemd[1]: Stopping systemd-networkd.service... Dec 13 14:13:55.006310 systemd[1]: Stopping systemd-resolved.service... Dec 13 14:13:55.015909 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 14:13:55.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:55.016122 systemd[1]: Stopped systemd-resolved.service. Dec 13 14:13:55.017746 systemd-networkd[1099]: eth0: DHCPv6 lease lost Dec 13 14:13:55.021000 audit: BPF prog-id=6 op=UNLOAD Dec 13 14:13:55.024052 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 14:13:55.024382 systemd[1]: Stopped systemd-networkd.service. Dec 13 14:13:55.025000 audit: BPF prog-id=9 op=UNLOAD Dec 13 14:13:55.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:55.029882 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 14:13:55.029961 systemd[1]: Closed systemd-networkd.socket. Dec 13 14:13:55.036586 systemd[1]: Stopping network-cleanup.service... Dec 13 14:13:55.038849 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 14:13:55.040000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:55.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:55.039883 systemd[1]: Stopped parse-ip-for-networkd.service. 
Dec 13 14:13:55.042448 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:13:55.046000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:55.042536 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:13:55.045388 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 14:13:55.045538 systemd[1]: Stopped systemd-modules-load.service. Dec 13 14:13:55.055257 systemd[1]: Stopping systemd-udevd.service... Dec 13 14:13:55.060330 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 14:13:55.069000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:55.068455 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 14:13:55.068766 systemd[1]: Stopped network-cleanup.service. Dec 13 14:13:55.083845 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 14:13:55.085817 systemd[1]: Stopped systemd-udevd.service. Dec 13 14:13:55.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:55.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:55.087000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:55.087000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:55.089000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:55.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:55.086560 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 14:13:55.103000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:55.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:55.109000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:55.086638 systemd[1]: Closed systemd-udevd-control.socket. Dec 13 14:13:55.087546 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 14:13:55.087615 systemd[1]: Closed systemd-udevd-kernel.socket. 
Dec 13 14:13:55.088369 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 14:13:55.088447 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 14:13:55.088805 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 14:13:55.088880 systemd[1]: Stopped dracut-cmdline.service. Dec 13 14:13:55.089048 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 14:13:55.089117 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 14:13:55.090531 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 14:13:55.137000 audit: BPF prog-id=8 op=UNLOAD Dec 13 14:13:55.137000 audit: BPF prog-id=7 op=UNLOAD Dec 13 14:13:55.140000 audit: BPF prog-id=5 op=UNLOAD Dec 13 14:13:55.140000 audit: BPF prog-id=4 op=UNLOAD Dec 13 14:13:55.140000 audit: BPF prog-id=3 op=UNLOAD Dec 13 14:13:55.091055 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 14:13:55.091184 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Dec 13 14:13:55.101246 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 14:13:55.101348 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 14:13:55.103523 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 14:13:55.103609 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 14:13:55.109062 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 14:13:55.109263 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 14:13:55.176620 systemd-journald[309]: Received SIGTERM from PID 1 (n/a). Dec 13 14:13:55.111340 systemd[1]: Reached target initrd-switch-root.target. Dec 13 14:13:55.113605 systemd[1]: Starting initrd-switch-root.service... Dec 13 14:13:55.137485 systemd[1]: Switching root. Dec 13 14:13:55.181815 systemd-journald[309]: Journal stopped Dec 13 14:14:00.001273 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 14:14:00.001393 kernel: SELinux: Class anon_inode not defined in policy. Dec 13 14:14:00.001434 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 14:14:00.001472 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 14:14:00.001503 kernel: SELinux: policy capability open_perms=1 Dec 13 14:14:00.001533 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 14:14:00.001572 kernel: SELinux: policy capability always_check_network=0 Dec 13 14:14:00.001601 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 14:14:00.001630 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 14:14:00.001661 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 14:14:00.002341 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 14:14:00.002404 systemd[1]: Successfully loaded SELinux policy in 76.690ms. Dec 13 14:14:00.017908 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.118ms. Dec 13 14:14:00.017962 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 14:14:00.017995 systemd[1]: Detected virtualization amazon. Dec 13 14:14:00.018029 systemd[1]: Detected architecture arm64. Dec 13 14:14:00.018061 systemd[1]: Detected first boot. Dec 13 14:14:00.018094 systemd[1]: Initializing machine ID from VM UUID. 
Dec 13 14:14:00.018135 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Dec 13 14:14:00.018169 systemd[1]: Populated /etc with preset unit settings. Dec 13 14:14:00.018202 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:14:00.018236 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:14:00.018272 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:14:00.018316 systemd[1]: Queued start job for default target multi-user.target. Dec 13 14:14:00.018352 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 14:14:00.018385 systemd[1]: Created slice system-addon\x2drun.slice. Dec 13 14:14:00.018425 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Dec 13 14:14:00.018457 systemd[1]: Created slice system-getty.slice. Dec 13 14:14:00.018486 systemd[1]: Created slice system-modprobe.slice. Dec 13 14:14:00.030931 systemd[1]: Created slice system-serial\x2dgetty.slice. Dec 13 14:14:00.030977 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 14:14:00.031011 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 14:14:00.031053 systemd[1]: Created slice user.slice. Dec 13 14:14:00.031086 systemd[1]: Started systemd-ask-password-console.path. Dec 13 14:14:00.031117 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 14:14:00.031159 systemd[1]: Set up automount boot.automount. Dec 13 14:14:00.031193 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 14:14:00.031226 systemd[1]: Reached target integritysetup.target. Dec 13 14:14:00.031259 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:14:00.031295 systemd[1]: Reached target remote-fs.target. Dec 13 14:14:00.031330 systemd[1]: Reached target slices.target. Dec 13 14:14:00.031361 systemd[1]: Reached target swap.target. Dec 13 14:14:00.031392 systemd[1]: Reached target torcx.target. Dec 13 14:14:00.031428 systemd[1]: Reached target veritysetup.target. Dec 13 14:14:00.031458 systemd[1]: Listening on systemd-coredump.socket. Dec 13 14:14:00.031492 systemd[1]: Listening on systemd-initctl.socket. Dec 13 14:14:00.031524 kernel: kauditd_printk_skb: 56 callbacks suppressed Dec 13 14:14:00.031560 kernel: audit: type=1400 audit(1734099239.573:89): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:14:00.031593 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 14:14:00.031626 kernel: audit: type=1335 audit(1734099239.581:90): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 13 14:14:00.031661 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 14:14:00.031739 systemd[1]: Listening on systemd-journald.socket. Dec 13 14:14:00.031778 systemd[1]: Listening on systemd-networkd.socket. Dec 13 14:14:00.031811 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 14:14:00.031843 systemd[1]: Listening on systemd-udevd-kernel.socket. 
Dec 13 14:14:00.031874 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 14:14:00.031906 systemd[1]: Mounting dev-hugepages.mount... Dec 13 14:14:00.031938 systemd[1]: Mounting dev-mqueue.mount... Dec 13 14:14:00.031968 systemd[1]: Mounting media.mount... Dec 13 14:14:00.032000 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 14:14:00.032035 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 14:14:00.032066 systemd[1]: Mounting tmp.mount... Dec 13 14:14:00.032097 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 14:14:00.032129 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:14:00.032161 systemd[1]: Starting kmod-static-nodes.service... Dec 13 14:14:00.032192 systemd[1]: Starting modprobe@configfs.service... Dec 13 14:14:00.032223 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:14:00.032255 systemd[1]: Starting modprobe@drm.service... Dec 13 14:14:00.032285 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:14:00.032319 systemd[1]: Starting modprobe@fuse.service... Dec 13 14:14:00.032352 systemd[1]: Starting modprobe@loop.service... Dec 13 14:14:00.032383 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 14:14:00.032414 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Dec 13 14:14:00.032468 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Dec 13 14:14:00.032502 systemd[1]: Starting systemd-journald.service... Dec 13 14:14:00.032533 systemd[1]: Starting systemd-modules-load.service... Dec 13 14:14:00.032562 systemd[1]: Starting systemd-network-generator.service... Dec 13 14:14:00.032598 systemd[1]: Starting systemd-remount-fs.service... Dec 13 14:14:00.032628 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 14:14:00.032660 systemd[1]: Mounted dev-hugepages.mount. Dec 13 14:14:00.032689 systemd[1]: Mounted dev-mqueue.mount. Dec 13 14:14:00.032747 systemd[1]: Mounted media.mount. Dec 13 14:14:00.032778 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 14:14:00.032812 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 14:14:00.032843 systemd[1]: Mounted tmp.mount. Dec 13 14:14:00.032875 systemd[1]: Finished kmod-static-nodes.service. Dec 13 14:14:00.032909 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 14:14:00.032940 kernel: audit: type=1130 audit(1734099239.852:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:00.032970 systemd[1]: Finished modprobe@configfs.service. Dec 13 14:14:00.032999 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:14:00.033030 kernel: audit: type=1130 audit(1734099239.868:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:00.033060 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:14:00.033088 kernel: fuse: init (API version 7.34) Dec 13 14:14:00.033120 kernel: audit: type=1131 audit(1734099239.868:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:14:00.033154 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:14:00.033184 kernel: loop: module loaded Dec 13 14:14:00.033212 systemd[1]: Finished modprobe@drm.service. Dec 13 14:14:00.033244 kernel: audit: type=1130 audit(1734099239.892:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:00.033276 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:14:00.033309 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:14:00.033341 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:14:00.033371 kernel: audit: type=1131 audit(1734099239.892:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:00.033406 kernel: audit: type=1130 audit(1734099239.908:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:00.033436 systemd[1]: Finished modprobe@loop.service. Dec 13 14:14:00.033468 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 14:14:00.033498 systemd[1]: Finished modprobe@fuse.service. Dec 13 14:14:00.033530 kernel: audit: type=1131 audit(1734099239.908:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:00.033565 systemd[1]: Finished systemd-modules-load.service. Dec 13 14:14:00.033598 kernel: audit: type=1130 audit(1734099239.918:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:00.033627 systemd[1]: Finished systemd-network-generator.service. Dec 13 14:14:00.033656 systemd[1]: Finished systemd-remount-fs.service. Dec 13 14:14:00.033686 systemd[1]: Reached target network-pre.target. Dec 13 14:14:00.045008 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 14:14:00.045060 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 14:14:00.045094 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 14:14:00.045138 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 14:14:00.045171 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:14:00.045202 systemd[1]: Starting systemd-random-seed.service... Dec 13 14:14:00.045234 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:14:00.045273 systemd-journald[1443]: Journal started Dec 13 14:14:00.045377 systemd-journald[1443]: Runtime Journal (/run/log/journal/ec2d8f980b5d9c7852e63b7c8f0ccec3) is 8.0M, max 75.4M, 67.4M free. 
Dec 13 14:13:59.581000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 13 14:13:59.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:59.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:59.868000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:59.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:59.892000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:59.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:59.908000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:59.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:59.918000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:59.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:59.944000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:59.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:59.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:59.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:13:59.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:59.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:59.994000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 14:14:00.054908 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:13:59.994000 audit[1443]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=4 a1=ffffcfe7e3a0 a2=4000 a3=1 items=0 ppid=1 pid=1443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:13:59.994000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 14:14:00.064763 systemd[1]: Started systemd-journald.service. Dec 13 14:14:00.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:00.062768 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 14:14:00.064672 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 14:14:00.070919 systemd[1]: Starting systemd-journal-flush.service... Dec 13 14:14:00.089664 systemd-journald[1443]: Time spent on flushing to /var/log/journal/ec2d8f980b5d9c7852e63b7c8f0ccec3 is 83.316ms for 1078 entries. Dec 13 14:14:00.089664 systemd-journald[1443]: System Journal (/var/log/journal/ec2d8f980b5d9c7852e63b7c8f0ccec3) is 8.0M, max 195.6M, 187.6M free. Dec 13 14:14:00.195383 systemd-journald[1443]: Received client request to flush runtime journal. Dec 13 14:14:00.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:00.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:00.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:00.100166 systemd[1]: Finished systemd-random-seed.service. Dec 13 14:14:00.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:00.102052 systemd[1]: Reached target first-boot-complete.target. Dec 13 14:14:00.126059 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:14:00.159959 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 14:14:00.164312 systemd[1]: Starting systemd-sysusers.service... Dec 13 14:14:00.197073 systemd[1]: Finished systemd-journal-flush.service. 
Dec 13 14:14:00.208872 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 14:14:00.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:00.213301 systemd[1]: Starting systemd-udev-settle.service... Dec 13 14:14:00.232579 udevadm[1499]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 14:14:00.255443 systemd[1]: Finished systemd-sysusers.service. Dec 13 14:14:00.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:00.259572 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 14:14:00.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:00.319826 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 14:14:00.938398 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 14:14:00.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:00.942655 systemd[1]: Starting systemd-udevd.service... Dec 13 14:14:00.984209 systemd-udevd[1505]: Using default interface naming scheme 'v252'. Dec 13 14:14:01.028492 systemd[1]: Started systemd-udevd.service. Dec 13 14:14:01.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:01.033246 systemd[1]: Starting systemd-networkd.service... Dec 13 14:14:01.045488 systemd[1]: Starting systemd-userdbd.service... Dec 13 14:14:01.113978 systemd[1]: Found device dev-ttyS0.device. Dec 13 14:14:01.161041 systemd[1]: Started systemd-userdbd.service. Dec 13 14:14:01.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:01.175498 (udev-worker)[1523]: Network interface NamePolicy= disabled on kernel command line. Dec 13 14:14:01.324850 systemd-networkd[1512]: lo: Link UP Dec 13 14:14:01.331594 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1523) Dec 13 14:14:01.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:01.324865 systemd-networkd[1512]: lo: Gained carrier Dec 13 14:14:01.325752 systemd-networkd[1512]: Enumeration completed Dec 13 14:14:01.325942 systemd[1]: Started systemd-networkd.service. Dec 13 14:14:01.331220 systemd-networkd[1512]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 13 14:14:01.338165 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:14:01.337833 systemd-networkd[1512]: eth0: Link UP Dec 13 14:14:01.338122 systemd-networkd[1512]: eth0: Gained carrier Dec 13 14:14:01.346657 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 14:14:01.369937 systemd-networkd[1512]: eth0: DHCPv4 address 172.31.27.214/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 13 14:14:01.518417 systemd[1]: dev-disk-by\x2dlabel-OEM.device was skipped because of an unmet condition check (ConditionPathExists=!/usr/.noupdate). Dec 13 14:14:01.519187 systemd[1]: Finished systemd-udev-settle.service. Dec 13 14:14:01.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:01.542419 systemd[1]: Starting lvm2-activation-early.service... Dec 13 14:14:01.566519 lvm[1623]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:14:01.604584 systemd[1]: Finished lvm2-activation-early.service. Dec 13 14:14:01.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:01.607278 systemd[1]: Reached target cryptsetup.target. Dec 13 14:14:01.611754 systemd[1]: Starting lvm2-activation.service... Dec 13 14:14:01.621907 lvm[1625]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:14:01.658577 systemd[1]: Finished lvm2-activation.service. Dec 13 14:14:01.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:01.660813 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:14:01.662615 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 14:14:01.662829 systemd[1]: Reached target local-fs.target. Dec 13 14:14:01.664882 systemd[1]: Reached target machines.target. Dec 13 14:14:01.669243 systemd[1]: Starting ldconfig.service... Dec 13 14:14:01.672337 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:14:01.672495 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:14:01.674966 systemd[1]: Starting systemd-boot-update.service... Dec 13 14:14:01.678550 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 14:14:01.683142 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 14:14:01.687499 systemd[1]: Starting systemd-sysext.service... Dec 13 14:14:01.701785 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1628 (bootctl) Dec 13 14:14:01.704186 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 14:14:01.725076 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 14:14:01.734878 systemd[1]: usr-share-oem.mount: Deactivated successfully. 
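eth0 is matched by /usr/lib/systemd/network/zz-default.network and acquires 172.31.27.214/20 over DHCP. A hedged approximation of such a catch-all DHCP profile; the shipped file may differ, and dropping a same-named file under /etc/systemd/network overrides the one in /usr/lib:

cat <<'EOF' >/etc/systemd/network/zz-default.network
[Match]
Name=*

[Network]
DHCP=yes
EOF
networkctl status eth0   # inspect the acquired DHCPv4 lease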
Dec 13 14:14:01.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:01.735424 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 14:14:01.738042 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 14:14:01.773752 kernel: loop0: detected capacity change from 0 to 194512 Dec 13 14:14:01.863618 systemd-fsck[1640]: fsck.fat 4.2 (2021-01-31) Dec 13 14:14:01.863618 systemd-fsck[1640]: /dev/nvme0n1p1: 236 files, 117175/258078 clusters Dec 13 14:14:01.868658 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 14:14:01.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:01.873393 systemd[1]: Mounting boot.mount... Dec 13 14:14:01.907736 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 14:14:01.908799 systemd[1]: Mounted boot.mount. Dec 13 14:14:01.945961 kernel: loop1: detected capacity change from 0 to 194512 Dec 13 14:14:01.953896 systemd[1]: Finished systemd-boot-update.service. Dec 13 14:14:01.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:01.979101 (sd-sysext)[1659]: Using extensions 'kubernetes'. Dec 13 14:14:01.980068 (sd-sysext)[1659]: Merged extensions into '/usr'. Dec 13 14:14:02.027832 systemd[1]: Mounting usr-share-oem.mount... Dec 13 14:14:02.029848 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:14:02.033122 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:14:02.043592 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:14:02.049919 systemd[1]: Starting modprobe@loop.service... Dec 13 14:14:02.052136 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:14:02.052540 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:14:02.060397 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 14:14:02.066108 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 14:14:02.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:02.070502 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:14:02.070982 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:14:02.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:02.071000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:14:02.080972 systemd[1]: Mounted usr-share-oem.mount. Dec 13 14:14:02.085898 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:14:02.086342 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:14:02.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:02.088000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:02.091227 systemd[1]: Finished systemd-sysext.service. Dec 13 14:14:02.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:02.097931 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:14:02.098415 systemd[1]: Finished modprobe@loop.service. Dec 13 14:14:02.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:02.100000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:02.108870 systemd[1]: Starting ensure-sysext.service... Dec 13 14:14:02.110472 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:14:02.110654 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:14:02.113991 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 14:14:02.148209 systemd[1]: Reloading. Dec 13 14:14:02.160473 systemd-tmpfiles[1675]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 14:14:02.164203 systemd-tmpfiles[1675]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 14:14:02.169172 systemd-tmpfiles[1675]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 14:14:02.316910 /usr/lib/systemd/system-generators/torcx-generator[1697]: time="2024-12-13T14:14:02Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:14:02.316983 /usr/lib/systemd/system-generators/torcx-generator[1697]: time="2024-12-13T14:14:02Z" level=info msg="torcx already run" Dec 13 14:14:02.616769 ldconfig[1627]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 14:14:02.628214 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:14:02.628259 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
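(sd-sysext) merging the 'kubernetes' extension into '/usr' is systemd-sysext at work: it overlays extension images onto /usr and /opt. A short sketch of inspecting that state with the standard systemd-sysext verbs; the release-file contents shown in the comment are illustrative, not read from this host:

systemd-sysext status    # list merged extension images and where they apply
systemd-sysext refresh   # re-merge after adding or removing images
# Each image identifies itself via /usr/lib/extension-release.d/extension-release.<name>,
# e.g. for 'kubernetes' something like:  ID=flatcar  SYSEXT_LEVEL=1.0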
Dec 13 14:14:02.679355 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:14:02.819928 systemd[1]: Finished ldconfig.service. Dec 13 14:14:02.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:02.823651 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 14:14:02.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:02.833126 systemd[1]: Starting audit-rules.service... Dec 13 14:14:02.839090 systemd[1]: Starting clean-ca-certificates.service... Dec 13 14:14:02.846734 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 14:14:02.852121 systemd[1]: Starting systemd-resolved.service... Dec 13 14:14:02.863784 systemd[1]: Starting systemd-timesyncd.service... Dec 13 14:14:02.872975 systemd[1]: Starting systemd-update-utmp.service... Dec 13 14:14:02.881244 systemd[1]: Finished clean-ca-certificates.service. Dec 13 14:14:02.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:02.888498 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:14:02.893290 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:14:02.897823 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:14:02.904857 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:14:02.909257 systemd[1]: Starting modprobe@loop.service... Dec 13 14:14:02.911063 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:14:02.911331 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:14:02.911589 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:14:02.913575 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:14:02.914002 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:14:02.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:02.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:02.922043 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:14:02.927635 systemd[1]: Starting modprobe@dm_mod.service... 
Dec 13 14:14:02.932000 audit[1771]: SYSTEM_BOOT pid=1771 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 14:14:02.935038 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:14:02.935332 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:14:02.935592 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:14:02.937279 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:14:02.937657 systemd[1]: Finished modprobe@loop.service. Dec 13 14:14:02.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:02.938000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:02.943284 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:14:02.943660 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:14:02.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:02.944000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:02.950896 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:14:02.951711 systemd-networkd[1512]: eth0: Gained IPv6LL Dec 13 14:14:02.959370 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:14:02.965986 systemd[1]: Starting modprobe@drm.service... Dec 13 14:14:02.970188 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:14:02.979691 systemd[1]: Starting modprobe@loop.service... Dec 13 14:14:02.981520 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:14:02.981853 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:14:02.982187 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:14:02.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:02.991245 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 14:14:02.994298 systemd[1]: Finished systemd-update-utmp.service. 
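The recurring modprobe@dm_mod/efi_pstore/loop starts come from systemd's modprobe@.service template, which simply runs modprobe on the instance name and exits (hence the immediate SERVICE_START/SERVICE_STOP audit pairs). To see the exact unit text on a host like this:

systemctl cat modprobe@.service   # print the template unit as shipped
# starting modprobe@dm_mod.service is roughly equivalent to: modprobe dm_mod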
Dec 13 14:14:02.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:02.997054 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:14:02.997441 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:14:02.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:02.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:03.004344 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:14:03.004801 systemd[1]: Finished modprobe@drm.service. Dec 13 14:14:03.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:03.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:03.014861 systemd[1]: Finished ensure-sysext.service. Dec 13 14:14:03.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:03.032940 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:14:03.033341 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:14:03.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:03.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:03.035241 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:14:03.036813 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 14:14:03.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:03.041515 systemd[1]: Starting systemd-update-done.service... Dec 13 14:14:03.045621 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:14:03.046124 systemd[1]: Finished modprobe@loop.service. Dec 13 14:14:03.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:14:03.046000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:03.048124 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:14:03.078531 systemd[1]: Finished systemd-update-done.service. Dec 13 14:14:03.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:03.111000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 14:14:03.111000 audit[1800]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffeb17fe20 a2=420 a3=0 items=0 ppid=1759 pid=1800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:14:03.111000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 14:14:03.113185 augenrules[1800]: No rules Dec 13 14:14:03.114328 systemd[1]: Finished audit-rules.service. Dec 13 14:14:03.172869 systemd-resolved[1763]: Positive Trust Anchors: Dec 13 14:14:03.172898 systemd-resolved[1763]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 14:14:03.172952 systemd-resolved[1763]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 14:14:03.185492 systemd[1]: Started systemd-timesyncd.service. Dec 13 14:14:03.187483 systemd[1]: Reached target time-set.target. Dec 13 14:14:03.205938 systemd-resolved[1763]: Defaulting to hostname 'linux'. Dec 13 14:14:03.208884 systemd[1]: Started systemd-resolved.service. Dec 13 14:14:03.210593 systemd[1]: Reached target network.target. Dec 13 14:14:03.212121 systemd[1]: Reached target network-online.target. Dec 13 14:14:03.213764 systemd[1]: Reached target nss-lookup.target. Dec 13 14:14:03.215322 systemd[1]: Reached target sysinit.target. Dec 13 14:14:03.216942 systemd[1]: Started motdgen.path. Dec 13 14:14:03.218316 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 14:14:03.220652 systemd[1]: Started logrotate.timer. Dec 13 14:14:03.222180 systemd[1]: Started mdadm.timer. Dec 13 14:14:03.223467 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 14:14:03.225092 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 14:14:03.225148 systemd[1]: Reached target paths.target. Dec 13 14:14:03.226542 systemd[1]: Reached target timers.target. Dec 13 14:14:03.235824 systemd[1]: Listening on dbus.socket. Dec 13 14:14:03.239449 systemd[1]: Starting docker.socket... Dec 13 14:14:03.243646 systemd[1]: Listening on sshd.socket. 
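The audit PROCTITLE field above is hex-encoded argv with NUL separators; decoded, it reads '/sbin/auditctl -R /etc/audit/audit.rules', consistent with augenrules[1800] reporting 'No rules'. A small decoding sketch (assumes xxd, shipped with vim, is available):

echo 2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 \
  | xxd -r -p | tr '\0' ' ' && echo
# -> /sbin/auditctl -R /etc/audit/audit.rules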
Dec 13 14:14:03.245302 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:14:03.245935 systemd[1]: Listening on docker.socket. Dec 13 14:14:03.247485 systemd[1]: Reached target sockets.target. Dec 13 14:14:02.792416 systemd-resolved[1763]: Clock change detected. Flushing caches. Dec 13 14:14:02.965033 systemd-journald[1443]: Time jumped backwards, rotating. Dec 13 14:14:02.792615 systemd[1]: Reached target basic.target. Dec 13 14:14:02.798442 systemd-timesyncd[1770]: Contacted time server 137.190.2.4:123 (0.flatcar.pool.ntp.org). Dec 13 14:14:02.798539 systemd-timesyncd[1770]: Initial clock synchronization to Fri 2024-12-13 14:14:02.792341 UTC. Dec 13 14:14:02.799566 systemd[1]: System is tainted: cgroupsv1 Dec 13 14:14:02.799662 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:14:02.974769 jq[1815]: false Dec 13 14:14:02.799717 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:14:02.802138 systemd[1]: Started amazon-ssm-agent.service. Dec 13 14:14:02.806478 systemd[1]: Starting containerd.service... Dec 13 14:14:02.810269 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Dec 13 14:14:02.814677 systemd[1]: Starting dbus.service... Dec 13 14:14:02.819575 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 14:14:02.837284 systemd[1]: Starting extend-filesystems.service... Dec 13 14:14:02.838886 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 14:14:03.073577 jq[1841]: true Dec 13 14:14:02.845361 systemd[1]: Starting kubelet.service... Dec 13 14:14:02.869989 systemd[1]: Starting motdgen.service... Dec 13 14:14:02.874162 systemd[1]: Started nvidia.service. Dec 13 14:14:03.091616 tar[1848]: linux-arm64/helm Dec 13 14:14:02.884184 systemd[1]: Starting prepare-helm.service... Dec 13 14:14:02.902380 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 14:14:02.913515 systemd[1]: Starting sshd-keygen.service... Dec 13 14:14:02.930347 systemd[1]: Starting systemd-logind.service... Dec 13 14:14:03.128064 jq[1852]: true Dec 13 14:14:02.931875 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:14:02.932016 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 14:14:02.935005 systemd[1]: Starting update-engine.service... Dec 13 14:14:03.015009 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 14:14:03.147850 dbus-daemon[1814]: [system] SELinux support is enabled Dec 13 14:14:03.024039 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 14:14:03.024596 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 14:14:03.027977 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 14:14:03.030090 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 14:14:03.148239 systemd[1]: Started dbus.service. 
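Note the timestamps stepping back from 14:14:03 to 14:14:02 here: systemd-timesyncd reached 0.flatcar.pool.ntp.org and set the clock backwards, so systemd-resolved flushes its caches and journald rotates ('Time jumped backwards'). A sketch for pinning the NTP servers via the standard timesyncd.conf keys (the server names are examples):

cat <<'EOF' >/etc/systemd/timesyncd.conf
[Time]
NTP=0.flatcar.pool.ntp.org
FallbackNTP=pool.ntp.org
EOF
systemctl restart systemd-timesyncd
timedatectl timesync-status   # shows the contacted server and current offset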
Dec 13 14:14:03.153041 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 14:14:03.153118 systemd[1]: Reached target system-config.target. Dec 13 14:14:03.154894 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 14:14:03.154939 systemd[1]: Reached target user-config.target. Dec 13 14:14:03.166111 extend-filesystems[1818]: Found loop1 Dec 13 14:14:03.166111 extend-filesystems[1818]: Found nvme0n1 Dec 13 14:14:03.166111 extend-filesystems[1818]: Found nvme0n1p1 Dec 13 14:14:03.166111 extend-filesystems[1818]: Found nvme0n1p2 Dec 13 14:14:03.185606 extend-filesystems[1818]: Found nvme0n1p3 Dec 13 14:14:03.185606 extend-filesystems[1818]: Found usr Dec 13 14:14:03.185606 extend-filesystems[1818]: Found nvme0n1p4 Dec 13 14:14:03.185606 extend-filesystems[1818]: Found nvme0n1p6 Dec 13 14:14:03.185606 extend-filesystems[1818]: Found nvme0n1p7 Dec 13 14:14:03.185606 extend-filesystems[1818]: Found nvme0n1p9 Dec 13 14:14:03.185606 extend-filesystems[1818]: Checking size of /dev/nvme0n1p9 Dec 13 14:14:03.177430 dbus-daemon[1814]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1512 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 13 14:14:03.195765 systemd[1]: Starting systemd-hostnamed.service... Dec 13 14:14:03.187700 dbus-daemon[1814]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 14:14:03.202876 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 14:14:03.204116 systemd[1]: Finished motdgen.service. Dec 13 14:14:03.251110 extend-filesystems[1818]: Resized partition /dev/nvme0n1p9 Dec 13 14:14:03.259137 extend-filesystems[1881]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 14:14:03.303098 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Dec 13 14:14:03.324416 amazon-ssm-agent[1810]: 2024/12/13 14:14:03 Failed to load instance info from vault. RegistrationKey does not exist. Dec 13 14:14:03.325464 amazon-ssm-agent[1810]: Initializing new seelog logger Dec 13 14:14:03.326253 amazon-ssm-agent[1810]: New Seelog Logger Creation Complete Dec 13 14:14:03.328809 amazon-ssm-agent[1810]: 2024/12/13 14:14:03 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 14:14:03.329048 amazon-ssm-agent[1810]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 14:14:03.330099 amazon-ssm-agent[1810]: 2024/12/13 14:14:03 processing appconfig overrides Dec 13 14:14:03.385101 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Dec 13 14:14:03.399425 extend-filesystems[1881]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Dec 13 14:14:03.399425 extend-filesystems[1881]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 14:14:03.399425 extend-filesystems[1881]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Dec 13 14:14:03.407326 extend-filesystems[1818]: Resized filesystem in /dev/nvme0n1p9 Dec 13 14:14:03.422381 bash[1901]: Updated "/home/core/.ssh/authorized_keys" Dec 13 14:14:03.406996 systemd[1]: extend-filesystems.service: Deactivated successfully. 
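extend-filesystems.service grows the root filesystem on /dev/nvme0n1p9 online, from 553472 to 1489915 4k blocks, via resize2fs 1.46.5. The manual equivalent is roughly the following; growpart comes from cloud-utils and is an assumption about how the partition itself was enlarged, while the resize2fs call mirrors what the log shows:

growpart /dev/nvme0n1 9      # grow partition 9 to fill the disk (assumes cloud-utils)
resize2fs /dev/nvme0n1p9     # online-grow the mounted ext4 filesystem, as logged above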
Dec 13 14:14:03.423225 update_engine[1834]: I1213 14:14:03.422199 1834 main.cc:92] Flatcar Update Engine starting Dec 13 14:14:03.446521 update_engine[1834]: I1213 14:14:03.440203 1834 update_check_scheduler.cc:74] Next update check in 9m49s Dec 13 14:14:03.444632 systemd[1]: Finished extend-filesystems.service. Dec 13 14:14:03.447301 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 14:14:03.449375 systemd[1]: Started update-engine.service. Dec 13 14:14:03.454596 systemd[1]: Started locksmithd.service. Dec 13 14:14:03.542814 systemd[1]: nvidia.service: Deactivated successfully. Dec 13 14:14:03.572786 systemd-logind[1832]: Watching system buttons on /dev/input/event0 (Power Button) Dec 13 14:14:03.574978 systemd-logind[1832]: Watching system buttons on /dev/input/event1 (Sleep Button) Dec 13 14:14:03.575576 systemd-logind[1832]: New seat seat0. Dec 13 14:14:03.589638 systemd[1]: Started systemd-logind.service. Dec 13 14:14:03.621060 env[1849]: time="2024-12-13T14:14:03.619762929Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 14:14:03.833257 env[1849]: time="2024-12-13T14:14:03.833120998Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 14:14:03.833459 env[1849]: time="2024-12-13T14:14:03.833394022Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:14:03.844345 env[1849]: time="2024-12-13T14:14:03.844261510Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:14:03.844345 env[1849]: time="2024-12-13T14:14:03.844333654Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:14:03.844880 env[1849]: time="2024-12-13T14:14:03.844818478Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:14:03.844971 env[1849]: time="2024-12-13T14:14:03.844873546Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 14:14:03.844971 env[1849]: time="2024-12-13T14:14:03.844916638Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 14:14:03.844971 env[1849]: time="2024-12-13T14:14:03.844942714Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 14:14:03.845281 env[1849]: time="2024-12-13T14:14:03.845155522Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:14:03.845744 env[1849]: time="2024-12-13T14:14:03.845684842Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:14:03.846174 env[1849]: time="2024-12-13T14:14:03.846115882Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:14:03.846296 env[1849]: time="2024-12-13T14:14:03.846168982Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 14:14:03.851286 env[1849]: time="2024-12-13T14:14:03.851207290Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 14:14:03.851286 env[1849]: time="2024-12-13T14:14:03.851264638Z" level=info msg="metadata content store policy set" policy=shared Dec 13 14:14:03.896530 dbus-daemon[1814]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 13 14:14:03.896757 systemd[1]: Started systemd-hostnamed.service. Dec 13 14:14:03.908553 dbus-daemon[1814]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1867 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 13 14:14:03.913574 systemd[1]: Starting polkit.service... Dec 13 14:14:03.922706 env[1849]: time="2024-12-13T14:14:03.922627618Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 14:14:03.922868 env[1849]: time="2024-12-13T14:14:03.922706506Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 14:14:03.922868 env[1849]: time="2024-12-13T14:14:03.922741294Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 14:14:03.922868 env[1849]: time="2024-12-13T14:14:03.922810294Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 14:14:03.922868 env[1849]: time="2024-12-13T14:14:03.922848214Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 14:14:03.923105 env[1849]: time="2024-12-13T14:14:03.922881214Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 14:14:03.923105 env[1849]: time="2024-12-13T14:14:03.922913290Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 14:14:03.923530 env[1849]: time="2024-12-13T14:14:03.923483554Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 14:14:03.923644 env[1849]: time="2024-12-13T14:14:03.923537890Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 14:14:03.923644 env[1849]: time="2024-12-13T14:14:03.923571994Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 14:14:03.923644 env[1849]: time="2024-12-13T14:14:03.923603098Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 14:14:03.923644 env[1849]: time="2024-12-13T14:14:03.923637358Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 14:14:03.923902 env[1849]: time="2024-12-13T14:14:03.923862898Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Dec 13 14:14:03.924082 env[1849]: time="2024-12-13T14:14:03.924038242Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 14:14:03.924784 env[1849]: time="2024-12-13T14:14:03.924620758Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 14:14:03.924784 env[1849]: time="2024-12-13T14:14:03.924713470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 14:14:03.924784 env[1849]: time="2024-12-13T14:14:03.924753202Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 14:14:03.925044 env[1849]: time="2024-12-13T14:14:03.924854878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 14:14:03.925044 env[1849]: time="2024-12-13T14:14:03.924900946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 14:14:03.925044 env[1849]: time="2024-12-13T14:14:03.924933406Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 14:14:03.925044 env[1849]: time="2024-12-13T14:14:03.924961786Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 14:14:03.925044 env[1849]: time="2024-12-13T14:14:03.924992266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 14:14:03.925044 env[1849]: time="2024-12-13T14:14:03.925021786Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 14:14:03.925356 env[1849]: time="2024-12-13T14:14:03.925052830Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 14:14:03.925356 env[1849]: time="2024-12-13T14:14:03.925119010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 14:14:03.925356 env[1849]: time="2024-12-13T14:14:03.925156294Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 14:14:03.925527 env[1849]: time="2024-12-13T14:14:03.925474486Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 14:14:03.925590 env[1849]: time="2024-12-13T14:14:03.925517314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 14:14:03.925590 env[1849]: time="2024-12-13T14:14:03.925549114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 14:14:03.925590 env[1849]: time="2024-12-13T14:14:03.925577938Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 14:14:03.925728 env[1849]: time="2024-12-13T14:14:03.925609918Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 14:14:03.925728 env[1849]: time="2024-12-13T14:14:03.925636894Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Dec 13 14:14:03.925728 env[1849]: time="2024-12-13T14:14:03.925670710Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 14:14:03.925895 env[1849]: time="2024-12-13T14:14:03.925733878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 14:14:03.926208 env[1849]: time="2024-12-13T14:14:03.926098174Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 14:14:03.927863 env[1849]: time="2024-12-13T14:14:03.926213182Z" level=info msg="Connect containerd service" Dec 13 14:14:03.927863 env[1849]: time="2024-12-13T14:14:03.926279302Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 14:14:03.929136 env[1849]: time="2024-12-13T14:14:03.928897918Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:14:03.930912 env[1849]: time="2024-12-13T14:14:03.930782722Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Dec 13 14:14:03.934972 env[1849]: time="2024-12-13T14:14:03.930869038Z" level=info msg="Start subscribing containerd event" Dec 13 14:14:03.936175 env[1849]: time="2024-12-13T14:14:03.936116074Z" level=info msg="Start recovering state" Dec 13 14:14:03.937336 env[1849]: time="2024-12-13T14:14:03.937154422Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 14:14:03.937835 systemd[1]: Started containerd.service. Dec 13 14:14:03.944759 env[1849]: time="2024-12-13T14:14:03.944692642Z" level=info msg="Start event monitor" Dec 13 14:14:03.944969 env[1849]: time="2024-12-13T14:14:03.944938486Z" level=info msg="Start snapshots syncer" Dec 13 14:14:03.945141 env[1849]: time="2024-12-13T14:14:03.945101758Z" level=info msg="Start cni network conf syncer for default" Dec 13 14:14:03.945259 env[1849]: time="2024-12-13T14:14:03.945232222Z" level=info msg="Start streaming server" Dec 13 14:14:03.960762 env[1849]: time="2024-12-13T14:14:03.960707639Z" level=info msg="containerd successfully booted in 0.440272s" Dec 13 14:14:03.961699 polkitd[1962]: Started polkitd version 121 Dec 13 14:14:04.000592 polkitd[1962]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 14:14:04.000711 polkitd[1962]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 13 14:14:04.021470 polkitd[1962]: Finished loading, compiling and executing 2 rules Dec 13 14:14:04.022992 dbus-daemon[1814]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 14:14:04.023467 systemd[1]: Started polkit.service. Dec 13 14:14:04.028175 polkitd[1962]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 13 14:14:04.061005 systemd-resolved[1763]: System hostname changed to 'ip-172-31-27-214'. Dec 13 14:14:04.061008 systemd-hostnamed[1867]: Hostname set to (transient) Dec 13 14:14:04.085679 coreos-metadata[1813]: Dec 13 14:14:04.085 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 13 14:14:04.087523 coreos-metadata[1813]: Dec 13 14:14:04.087 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Dec 13 14:14:04.094442 coreos-metadata[1813]: Dec 13 14:14:04.091 INFO Fetch successful Dec 13 14:14:04.094442 coreos-metadata[1813]: Dec 13 14:14:04.091 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 13 14:14:04.099012 coreos-metadata[1813]: Dec 13 14:14:04.098 INFO Fetch successful Dec 13 14:14:04.111016 unknown[1813]: wrote ssh authorized keys file for user: core Dec 13 14:14:04.130920 update-ssh-keys[1987]: Updated "/home/core/.ssh/authorized_keys" Dec 13 14:14:04.132332 systemd[1]: Finished coreos-metadata-sshkeys@core.service. 
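The long 'Start cri plugin with config {...}' dump above is containerd 1.6 printing its effective CRI configuration; note SystemdCgroup:false and the sandbox image registry.k8s.io/pause:3.6. A hedged sketch of stating those same knobs explicitly in /etc/containerd/config.toml (containerd v2 config schema):

cat <<'EOF' >/etc/containerd/config.toml
version = 2
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.6"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = false
EOF
systemctl restart containerd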
Dec 13 14:14:04.160147 amazon-ssm-agent[1810]: 2024-12-13 14:14:04 INFO Create new startup processor Dec 13 14:14:04.165493 amazon-ssm-agent[1810]: 2024-12-13 14:14:04 INFO [LongRunningPluginsManager] registered plugins: {} Dec 13 14:14:04.166379 amazon-ssm-agent[1810]: 2024-12-13 14:14:04 INFO Initializing bookkeeping folders Dec 13 14:14:04.166523 amazon-ssm-agent[1810]: 2024-12-13 14:14:04 INFO removing the completed state files Dec 13 14:14:04.166634 amazon-ssm-agent[1810]: 2024-12-13 14:14:04 INFO Initializing bookkeeping folders for long running plugins Dec 13 14:14:04.166745 amazon-ssm-agent[1810]: 2024-12-13 14:14:04 INFO Initializing replies folder for MDS reply requests that couldn't reach the service Dec 13 14:14:04.166876 amazon-ssm-agent[1810]: 2024-12-13 14:14:04 INFO Initializing healthcheck folders for long running plugins Dec 13 14:14:04.166984 amazon-ssm-agent[1810]: 2024-12-13 14:14:04 INFO Initializing locations for inventory plugin Dec 13 14:14:04.167132 amazon-ssm-agent[1810]: 2024-12-13 14:14:04 INFO Initializing default location for custom inventory Dec 13 14:14:04.167256 amazon-ssm-agent[1810]: 2024-12-13 14:14:04 INFO Initializing default location for file inventory Dec 13 14:14:04.167365 amazon-ssm-agent[1810]: 2024-12-13 14:14:04 INFO Initializing default location for role inventory Dec 13 14:14:04.167474 amazon-ssm-agent[1810]: 2024-12-13 14:14:04 INFO Init the cloudwatchlogs publisher Dec 13 14:14:04.167583 amazon-ssm-agent[1810]: 2024-12-13 14:14:04 INFO [instanceID=i-0b5800c5a7c369c4d] Successfully loaded platform independent plugin aws:downloadContent Dec 13 14:14:04.167702 amazon-ssm-agent[1810]: 2024-12-13 14:14:04 INFO [instanceID=i-0b5800c5a7c369c4d] Successfully loaded platform independent plugin aws:softwareInventory Dec 13 14:14:04.167835 amazon-ssm-agent[1810]: 2024-12-13 14:14:04 INFO [instanceID=i-0b5800c5a7c369c4d] Successfully loaded platform independent plugin aws:runPowerShellScript Dec 13 14:14:04.167944 amazon-ssm-agent[1810]: 2024-12-13 14:14:04 INFO [instanceID=i-0b5800c5a7c369c4d] Successfully loaded platform independent plugin aws:updateSsmAgent Dec 13 14:14:04.168052 amazon-ssm-agent[1810]: 2024-12-13 14:14:04 INFO [instanceID=i-0b5800c5a7c369c4d] Successfully loaded platform independent plugin aws:configureDocker Dec 13 14:14:04.168281 amazon-ssm-agent[1810]: 2024-12-13 14:14:04 INFO [instanceID=i-0b5800c5a7c369c4d] Successfully loaded platform independent plugin aws:runDockerAction Dec 13 14:14:04.168390 amazon-ssm-agent[1810]: 2024-12-13 14:14:04 INFO [instanceID=i-0b5800c5a7c369c4d] Successfully loaded platform independent plugin aws:refreshAssociation Dec 13 14:14:04.168498 amazon-ssm-agent[1810]: 2024-12-13 14:14:04 INFO [instanceID=i-0b5800c5a7c369c4d] Successfully loaded platform independent plugin aws:configurePackage Dec 13 14:14:04.168621 amazon-ssm-agent[1810]: 2024-12-13 14:14:04 INFO [instanceID=i-0b5800c5a7c369c4d] Successfully loaded platform independent plugin aws:runDocument Dec 13 14:14:04.168729 amazon-ssm-agent[1810]: 2024-12-13 14:14:04 INFO [instanceID=i-0b5800c5a7c369c4d] Successfully loaded platform dependent plugin aws:runShellScript Dec 13 14:14:04.168837 amazon-ssm-agent[1810]: 2024-12-13 14:14:04 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 Dec 13 14:14:04.168953 amazon-ssm-agent[1810]: 2024-12-13 14:14:04 INFO OS: linux, Arch: arm64 Dec 13 14:14:04.169460 amazon-ssm-agent[1810]: 2024-12-13 14:14:04 INFO [MessagingDeliveryService] Starting document processing engine... 
Dec 13 14:14:04.173577 amazon-ssm-agent[1810]: datastore file /var/lib/amazon/ssm/i-0b5800c5a7c369c4d/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute Dec 13 14:14:04.271364 amazon-ssm-agent[1810]: 2024-12-13 14:14:04 INFO [MessagingDeliveryService] [EngineProcessor] Starting Dec 13 14:14:04.367876 amazon-ssm-agent[1810]: 2024-12-13 14:14:04 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing Dec 13 14:14:04.462388 amazon-ssm-agent[1810]: 2024-12-13 14:14:04 INFO [MessagingDeliveryService] Starting message polling Dec 13 14:14:04.558825 amazon-ssm-agent[1810]: 2024-12-13 14:14:04 INFO [MessagingDeliveryService] Starting send replies to MDS Dec 13 14:14:04.653816 amazon-ssm-agent[1810]: 2024-12-13 14:14:04 INFO [instanceID=i-0b5800c5a7c369c4d] Starting association polling Dec 13 14:14:04.748854 amazon-ssm-agent[1810]: 2024-12-13 14:14:04 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting Dec 13 14:14:04.844174 amazon-ssm-agent[1810]: 2024-12-13 14:14:04 INFO [MessagingDeliveryService] [Association] Launching response handler Dec 13 14:14:04.939809 amazon-ssm-agent[1810]: 2024-12-13 14:14:04 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing Dec 13 14:14:05.002878 tar[1848]: linux-arm64/LICENSE Dec 13 14:14:05.003628 tar[1848]: linux-arm64/README.md Dec 13 14:14:05.021972 systemd[1]: Finished prepare-helm.service. Dec 13 14:14:05.035389 amazon-ssm-agent[1810]: 2024-12-13 14:14:04 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service Dec 13 14:14:05.118257 locksmithd[1906]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 14:14:05.131290 amazon-ssm-agent[1810]: 2024-12-13 14:14:04 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized Dec 13 14:14:05.227451 amazon-ssm-agent[1810]: 2024-12-13 14:14:04 INFO [HealthCheck] HealthCheck reporting agent health. Dec 13 14:14:05.323692 amazon-ssm-agent[1810]: 2024-12-13 14:14:04 INFO [LongRunningPluginsManager] starting long running plugin manager Dec 13 14:14:05.420222 amazon-ssm-agent[1810]: 2024-12-13 14:14:04 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute Dec 13 14:14:05.516976 amazon-ssm-agent[1810]: 2024-12-13 14:14:04 INFO [MessageGatewayService] Starting session document processing engine... Dec 13 14:14:05.613812 amazon-ssm-agent[1810]: 2024-12-13 14:14:04 INFO [MessageGatewayService] [EngineProcessor] Starting Dec 13 14:14:05.710898 amazon-ssm-agent[1810]: 2024-12-13 14:14:04 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module. Dec 13 14:14:05.808234 amazon-ssm-agent[1810]: 2024-12-13 14:14:04 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-0b5800c5a7c369c4d, requestId: 6a4784a4-7bf2-4711-b964-430aa75dc562 Dec 13 14:14:05.874808 systemd[1]: Started kubelet.service. Dec 13 14:14:05.905667 amazon-ssm-agent[1810]: 2024-12-13 14:14:04 INFO [OfflineService] Starting document processing engine... 
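The amazon-ssm-agent lines that follow are its normal startup inventory and channel setup; if they were to stall, the service can be checked directly with standard systemd tooling (unit name as started earlier in this log):

systemctl status amazon-ssm-agent.service
journalctl -u amazon-ssm-agent -e   # jump to the newest agent output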
Dec 13 14:14:06.003363 amazon-ssm-agent[1810]: 2024-12-13 14:14:04 INFO [OfflineService] [EngineProcessor] Starting Dec 13 14:14:06.101300 amazon-ssm-agent[1810]: 2024-12-13 14:14:04 INFO [OfflineService] [EngineProcessor] Initial processing Dec 13 14:14:06.199282 amazon-ssm-agent[1810]: 2024-12-13 14:14:04 INFO [OfflineService] Starting message polling Dec 13 14:14:06.297555 amazon-ssm-agent[1810]: 2024-12-13 14:14:04 INFO [OfflineService] Starting send replies to MDS Dec 13 14:14:06.396010 amazon-ssm-agent[1810]: 2024-12-13 14:14:04 INFO [MessageGatewayService] listening reply. Dec 13 14:14:06.494610 amazon-ssm-agent[1810]: 2024-12-13 14:14:04 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck Dec 13 14:14:06.593481 amazon-ssm-agent[1810]: 2024-12-13 14:14:04 INFO [StartupProcessor] Executing startup processor tasks Dec 13 14:14:06.692602 amazon-ssm-agent[1810]: 2024-12-13 14:14:04 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running Dec 13 14:14:06.791779 amazon-ssm-agent[1810]: 2024-12-13 14:14:04 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk Dec 13 14:14:06.891263 amazon-ssm-agent[1810]: 2024-12-13 14:14:04 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.6 Dec 13 14:14:06.990987 amazon-ssm-agent[1810]: 2024-12-13 14:14:04 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0b5800c5a7c369c4d?role=subscribe&stream=input Dec 13 14:14:07.090777 amazon-ssm-agent[1810]: 2024-12-13 14:14:04 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0b5800c5a7c369c4d?role=subscribe&stream=input Dec 13 14:14:07.190779 amazon-ssm-agent[1810]: 2024-12-13 14:14:04 INFO [MessageGatewayService] Starting receiving message from control channel Dec 13 14:14:07.281687 sshd_keygen[1865]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 14:14:07.291052 amazon-ssm-agent[1810]: 2024-12-13 14:14:04 INFO [MessageGatewayService] [EngineProcessor] Initial processing Dec 13 14:14:07.324164 systemd[1]: Finished sshd-keygen.service. Dec 13 14:14:07.330421 systemd[1]: Starting issuegen.service... Dec 13 14:14:07.343565 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 14:14:07.344098 systemd[1]: Finished issuegen.service. Dec 13 14:14:07.349021 systemd[1]: Starting systemd-user-sessions.service... Dec 13 14:14:07.368756 systemd[1]: Finished systemd-user-sessions.service. Dec 13 14:14:07.373545 systemd[1]: Started getty@tty1.service. Dec 13 14:14:07.379423 systemd[1]: Started serial-getty@ttyS0.service. Dec 13 14:14:07.382033 systemd[1]: Reached target getty.target. Dec 13 14:14:07.384340 systemd[1]: Reached target multi-user.target. Dec 13 14:14:07.389567 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 14:14:07.409689 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 14:14:07.410873 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 14:14:07.419772 systemd[1]: Startup finished in 9.022s (kernel) + 12.388s (userspace) = 21.410s. 
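'Startup finished in 9.022s (kernel) + 12.388s (userspace) = 21.410s.' is the manager's own timing summary; systemd-analyze breaks the same numbers down per unit once the system is up:

systemd-analyze                                    # kernel/userspace split, as logged
systemd-analyze blame                              # slowest units first
systemd-analyze critical-chain multi-user.target   # the dependency path that gated boot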
Dec 13 14:14:07.441222 kubelet[2040]: E1213 14:14:07.441127 2040 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:14:07.445448 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:14:07.445863 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:14:12.023525 systemd[1]: Created slice system-sshd.slice. Dec 13 14:14:12.025797 systemd[1]: Started sshd@0-172.31.27.214:22-139.178.89.65:57108.service. Dec 13 14:14:12.221122 sshd[2066]: Accepted publickey for core from 139.178.89.65 port 57108 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:14:12.227991 sshd[2066]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:14:12.248858 systemd[1]: Created slice user-500.slice. Dec 13 14:14:12.250912 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 14:14:12.255879 systemd-logind[1832]: New session 1 of user core. Dec 13 14:14:12.273616 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 14:14:12.279339 systemd[1]: Starting user@500.service... Dec 13 14:14:12.291193 (systemd)[2071]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:14:12.470579 systemd[2071]: Queued start job for default target default.target. Dec 13 14:14:12.471013 systemd[2071]: Reached target paths.target. Dec 13 14:14:12.471054 systemd[2071]: Reached target sockets.target. Dec 13 14:14:12.471130 systemd[2071]: Reached target timers.target. Dec 13 14:14:12.471165 systemd[2071]: Reached target basic.target. Dec 13 14:14:12.471359 systemd[1]: Started user@500.service. Dec 13 14:14:12.473239 systemd[1]: Started session-1.scope. Dec 13 14:14:12.474164 systemd[2071]: Reached target default.target. Dec 13 14:14:12.474555 systemd[2071]: Startup finished in 171ms. Dec 13 14:14:12.616516 systemd[1]: Started sshd@1-172.31.27.214:22-139.178.89.65:57116.service. Dec 13 14:14:12.787761 sshd[2080]: Accepted publickey for core from 139.178.89.65 port 57116 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:14:12.790863 sshd[2080]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:14:12.799722 systemd[1]: Started session-2.scope. Dec 13 14:14:12.800435 systemd-logind[1832]: New session 2 of user core. Dec 13 14:14:12.930634 sshd[2080]: pam_unix(sshd:session): session closed for user core Dec 13 14:14:12.935573 systemd[1]: sshd@1-172.31.27.214:22-139.178.89.65:57116.service: Deactivated successfully. Dec 13 14:14:12.937378 systemd-logind[1832]: Session 2 logged out. Waiting for processes to exit. Dec 13 14:14:12.937491 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 14:14:12.940004 systemd-logind[1832]: Removed session 2. Dec 13 14:14:12.955949 systemd[1]: Started sshd@2-172.31.27.214:22-139.178.89.65:57122.service. Dec 13 14:14:13.125222 sshd[2087]: Accepted publickey for core from 139.178.89.65 port 57122 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:14:13.128225 sshd[2087]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:14:13.136172 systemd-logind[1832]: New session 3 of user core. Dec 13 14:14:13.136671 systemd[1]: Started session-3.scope. 
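The kubelet exit recorded at 14:14:07 (status=1/FAILURE) is the expected state of a node that has not yet been bootstrapped: /var/lib/kubelet/config.yaml is written by kubeadm init or kubeadm join, and until it exists systemd keeps restarting the unit (restart counters 1, 2, and 3 appear later in this log). A minimal sketch of the same existence check, assuming only the path reported in the error; diagnose() is a hypothetical helper, not kubelet code:

    import os
    import sys

    KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"  # path reported in the error above

    def diagnose(path: str) -> int:
        # Mirror the kubelet's failure mode: a missing --config file is fatal (exit 1).
        if not os.path.exists(path):
            print(f"failed to read kubelet config file {path!r}: no such file or directory",
                  file=sys.stderr)
            return 1  # systemd records this as status=1/FAILURE and schedules a restart
        print(f"kubelet config present: {path}")
        return 0

    if __name__ == "__main__":
        sys.exit(diagnose(KUBELET_CONFIG))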
Dec 13 14:14:13.258634 sshd[2087]: pam_unix(sshd:session): session closed for user core Dec 13 14:14:13.263981 systemd[1]: sshd@2-172.31.27.214:22-139.178.89.65:57122.service: Deactivated successfully. Dec 13 14:14:13.265661 systemd-logind[1832]: Session 3 logged out. Waiting for processes to exit. Dec 13 14:14:13.265837 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 14:14:13.268679 systemd-logind[1832]: Removed session 3. Dec 13 14:14:13.283453 systemd[1]: Started sshd@3-172.31.27.214:22-139.178.89.65:57130.service. Dec 13 14:14:13.452173 sshd[2094]: Accepted publickey for core from 139.178.89.65 port 57130 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:14:13.454615 sshd[2094]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:14:13.462579 systemd-logind[1832]: New session 4 of user core. Dec 13 14:14:13.463461 systemd[1]: Started session-4.scope. Dec 13 14:14:13.593642 sshd[2094]: pam_unix(sshd:session): session closed for user core Dec 13 14:14:13.599179 systemd[1]: sshd@3-172.31.27.214:22-139.178.89.65:57130.service: Deactivated successfully. Dec 13 14:14:13.601770 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 14:14:13.602797 systemd-logind[1832]: Session 4 logged out. Waiting for processes to exit. Dec 13 14:14:13.606371 systemd-logind[1832]: Removed session 4. Dec 13 14:14:13.618208 systemd[1]: Started sshd@4-172.31.27.214:22-139.178.89.65:57142.service. Dec 13 14:14:13.787103 sshd[2101]: Accepted publickey for core from 139.178.89.65 port 57142 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:14:13.790045 sshd[2101]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:14:13.798272 systemd-logind[1832]: New session 5 of user core. Dec 13 14:14:13.798614 systemd[1]: Started session-5.scope. Dec 13 14:14:13.932169 sudo[2105]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 14:14:13.932713 sudo[2105]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 14:14:13.979508 systemd[1]: Starting docker.service... 
Dec 13 14:14:14.068003 env[2115]: time="2024-12-13T14:14:14.067916369Z" level=info msg="Starting up" Dec 13 14:14:14.071049 env[2115]: time="2024-12-13T14:14:14.071006069Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 14:14:14.071230 env[2115]: time="2024-12-13T14:14:14.071201417Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 14:14:14.071369 env[2115]: time="2024-12-13T14:14:14.071338001Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 14:14:14.071474 env[2115]: time="2024-12-13T14:14:14.071447717Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 14:14:14.075022 env[2115]: time="2024-12-13T14:14:14.074973629Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 14:14:14.075022 env[2115]: time="2024-12-13T14:14:14.075013433Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 14:14:14.075311 env[2115]: time="2024-12-13T14:14:14.075048245Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 14:14:14.075311 env[2115]: time="2024-12-13T14:14:14.075108365Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 14:14:14.088418 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport830571559-merged.mount: Deactivated successfully. Dec 13 14:14:14.331512 env[2115]: time="2024-12-13T14:14:14.331386918Z" level=warning msg="Your kernel does not support cgroup blkio weight" Dec 13 14:14:14.331760 env[2115]: time="2024-12-13T14:14:14.331727658Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Dec 13 14:14:14.332171 env[2115]: time="2024-12-13T14:14:14.332138454Z" level=info msg="Loading containers: start." Dec 13 14:14:14.560132 kernel: Initializing XFRM netlink socket Dec 13 14:14:14.608578 env[2115]: time="2024-12-13T14:14:14.608161903Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Dec 13 14:14:14.610384 (udev-worker)[2126]: Network interface NamePolicy= disabled on kernel command line. Dec 13 14:14:14.728658 systemd-networkd[1512]: docker0: Link UP Dec 13 14:14:14.750634 env[2115]: time="2024-12-13T14:14:14.750568676Z" level=info msg="Loading containers: done." Dec 13 14:14:14.775421 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck324508569-merged.mount: Deactivated successfully. Dec 13 14:14:14.782981 env[2115]: time="2024-12-13T14:14:14.782918672Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 14:14:14.783597 env[2115]: time="2024-12-13T14:14:14.783567836Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Dec 13 14:14:14.783944 env[2115]: time="2024-12-13T14:14:14.783919580Z" level=info msg="Daemon has completed initialization" Dec 13 14:14:14.808215 systemd[1]: Started docker.service. 
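The daemon's hint that "Daemon option --bip can be used to set a preferred IP address" corresponds to the "bip" key in /etc/docker/daemon.json; pinning it matters when the default 172.17.0.0/16 overlaps the VPC CIDR. A hedged sketch writing such a file (the 192.168.255.1/24 value is an example, not taken from this host; requires root, and dockerd must be restarted to pick it up):

    import json

    # daemon.json fragment read by dockerd at startup; "bip" sets the docker0
    # bridge address and netmask, overriding the 172.17.0.0/16 default above.
    daemon_config = {"bip": "192.168.255.1/24"}

    with open("/etc/docker/daemon.json", "w") as f:
        json.dump(daemon_config, f, indent=2)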
Dec 13 14:14:14.822115 env[2115]: time="2024-12-13T14:14:14.822001592Z" level=info msg="API listen on /run/docker.sock" Dec 13 14:14:16.058816 env[1849]: time="2024-12-13T14:14:16.058753195Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 14:14:16.651651 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1322069042.mount: Deactivated successfully. Dec 13 14:14:17.697376 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 14:14:17.697727 systemd[1]: Stopped kubelet.service. Dec 13 14:14:17.701256 systemd[1]: Starting kubelet.service... Dec 13 14:14:17.998231 systemd[1]: Started kubelet.service. Dec 13 14:14:18.116242 kubelet[2249]: E1213 14:14:18.116176 2249 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:14:18.124239 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:14:18.124638 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:14:18.793474 env[1849]: time="2024-12-13T14:14:18.793412844Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:18.796012 env[1849]: time="2024-12-13T14:14:18.795950040Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:18.801114 env[1849]: time="2024-12-13T14:14:18.801037944Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:18.804839 env[1849]: time="2024-12-13T14:14:18.804790068Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:18.806469 env[1849]: time="2024-12-13T14:14:18.806419680Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\"" Dec 13 14:14:18.822514 env[1849]: time="2024-12-13T14:14:18.822423840Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 14:14:21.043458 env[1849]: time="2024-12-13T14:14:21.043392443Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:21.048177 env[1849]: time="2024-12-13T14:14:21.048112439Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:21.053148 env[1849]: time="2024-12-13T14:14:21.051156359Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:21.055484 env[1849]: 
time="2024-12-13T14:14:21.055438127Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:21.057484 env[1849]: time="2024-12-13T14:14:21.057438083Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\"" Dec 13 14:14:21.076468 env[1849]: time="2024-12-13T14:14:21.076390800Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 14:14:22.487038 env[1849]: time="2024-12-13T14:14:22.486978111Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:22.489431 env[1849]: time="2024-12-13T14:14:22.489384087Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:22.492749 env[1849]: time="2024-12-13T14:14:22.492683595Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:22.496555 env[1849]: time="2024-12-13T14:14:22.496495383Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:22.498201 env[1849]: time="2024-12-13T14:14:22.498154875Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\"" Dec 13 14:14:22.514815 env[1849]: time="2024-12-13T14:14:22.514757703Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 14:14:23.854664 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3698326028.mount: Deactivated successfully. 
Dec 13 14:14:24.639574 env[1849]: time="2024-12-13T14:14:24.639514049Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:24.642512 env[1849]: time="2024-12-13T14:14:24.642464909Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:24.644727 env[1849]: time="2024-12-13T14:14:24.644660621Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:24.647156 env[1849]: time="2024-12-13T14:14:24.647096189Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:24.648177 env[1849]: time="2024-12-13T14:14:24.648135845Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\"" Dec 13 14:14:24.664363 env[1849]: time="2024-12-13T14:14:24.664293641Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 14:14:25.178102 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2010498918.mount: Deactivated successfully. Dec 13 14:14:27.758627 env[1849]: time="2024-12-13T14:14:27.758566845Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:27.764579 env[1849]: time="2024-12-13T14:14:27.764506701Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:27.768319 env[1849]: time="2024-12-13T14:14:27.768258369Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:27.771963 env[1849]: time="2024-12-13T14:14:27.771903561Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:27.773730 env[1849]: time="2024-12-13T14:14:27.773682501Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Dec 13 14:14:27.789425 env[1849]: time="2024-12-13T14:14:27.789355281Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 14:14:28.324862 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 14:14:28.325180 systemd[1]: Stopped kubelet.service. Dec 13 14:14:28.328659 systemd[1]: Starting kubelet.service... Dec 13 14:14:28.352306 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount557616869.mount: Deactivated successfully. 
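Each image pull above follows the same pattern: a PullImage "<name:tag>" request, a burst of ImageCreate/ImageUpdate events, then PullImage ... returns image reference "sha256:...". A short sketch that recovers the tag-to-reference pairs from lines of this exact shape (the regex is written against the escaped msg format shown in this log and nothing else):

    import re

    # Matches: PullImage \"<name:tag>\" returns image reference \"sha256:<hex>\"
    PULL_RE = re.compile(
        r'PullImage \\"(?P<image>[^"\\]+)\\" returns image reference \\"(?P<ref>sha256:[0-9a-f]+)\\"'
    )

    def pulled_images(log_lines):
        # Yield (image:tag, image reference) pairs for every completed pull.
        for line in log_lines:
            m = PULL_RE.search(line)
            if m:
                yield m.group("image"), m.group("ref")

    sample = ('level=info msg="PullImage \\"registry.k8s.io/kube-proxy:v1.29.12\\" returns '
              'image reference \\"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\\""')
    print(list(pulled_images([sample])))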
Dec 13 14:14:28.373453 env[1849]: time="2024-12-13T14:14:28.373392704Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:28.380001 env[1849]: time="2024-12-13T14:14:28.379944104Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:28.385312 env[1849]: time="2024-12-13T14:14:28.385256432Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:28.390482 env[1849]: time="2024-12-13T14:14:28.390425060Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:28.392053 env[1849]: time="2024-12-13T14:14:28.391991780Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Dec 13 14:14:28.411734 env[1849]: time="2024-12-13T14:14:28.411680456Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 14:14:28.614035 systemd[1]: Started kubelet.service. Dec 13 14:14:28.702004 kubelet[2296]: E1213 14:14:28.701916 2296 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:14:28.707292 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:14:28.707679 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:14:28.964915 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3604480188.mount: Deactivated successfully. 
Dec 13 14:14:31.795087 env[1849]: time="2024-12-13T14:14:31.795007513Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:31.797692 env[1849]: time="2024-12-13T14:14:31.797631457Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:31.801286 env[1849]: time="2024-12-13T14:14:31.801228349Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:31.804908 env[1849]: time="2024-12-13T14:14:31.804855409Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:31.806673 env[1849]: time="2024-12-13T14:14:31.806611093Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Dec 13 14:14:31.992853 amazon-ssm-agent[1810]: 2024-12-13 14:14:31 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds. Dec 13 14:14:34.096566 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 13 14:14:38.813441 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 14:14:38.813800 systemd[1]: Stopped kubelet.service. Dec 13 14:14:38.817110 systemd[1]: Starting kubelet.service... Dec 13 14:14:39.111546 systemd[1]: Started kubelet.service. Dec 13 14:14:39.224426 kubelet[2374]: E1213 14:14:39.224361 2374 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:14:39.228320 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:14:39.228697 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:14:41.135921 systemd[1]: Stopped kubelet.service. Dec 13 14:14:41.143850 systemd[1]: Starting kubelet.service... Dec 13 14:14:41.193278 systemd[1]: Reloading. Dec 13 14:14:41.363258 /usr/lib/systemd/system-generators/torcx-generator[2408]: time="2024-12-13T14:14:41Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:14:41.363966 /usr/lib/systemd/system-generators/torcx-generator[2408]: time="2024-12-13T14:14:41Z" level=info msg="torcx already run" Dec 13 14:14:41.575486 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:14:41.575743 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
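The reload above repeats two deprecation warnings against /usr/lib/systemd/system/locksmithd.service lines 8-9. On a cgroup-v2 host the usual fix is a drop-in that clears CPUShares=/MemoryLimit= and sets their successors; a sketch, with placeholder values (the unit's real numbers are not visible in this log):

    from pathlib import Path

    # systemd merges *.conf drop-ins from <unit>.d/ over the shipped unit file,
    # so the vendor unit under /usr/lib stays untouched.
    dropin_dir = Path("/etc/systemd/system/locksmithd.service.d")
    dropin_dir.mkdir(parents=True, exist_ok=True)
    (dropin_dir / "10-cgroup-v2.conf").write_text(
        "[Service]\n"
        "CPUShares=\n"      # empty assignment clears the deprecated setting
        "MemoryLimit=\n"
        "CPUWeight=100\n"   # successor to CPUShares=
        "MemoryMax=512M\n"  # successor to MemoryLimit=; placeholder value
    )

A systemctl daemon-reload afterwards makes the new settings take effect and silences the warning on subsequent reloads.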
Dec 13 14:14:41.618660 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:14:41.823306 systemd[1]: Started kubelet.service. Dec 13 14:14:41.828640 systemd[1]: Stopping kubelet.service... Dec 13 14:14:41.832259 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 14:14:41.833006 systemd[1]: Stopped kubelet.service. Dec 13 14:14:41.837930 systemd[1]: Starting kubelet.service... Dec 13 14:14:42.106620 systemd[1]: Started kubelet.service. Dec 13 14:14:42.202951 kubelet[2485]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:14:42.202951 kubelet[2485]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 14:14:42.203575 kubelet[2485]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:14:42.203575 kubelet[2485]: I1213 14:14:42.203105 2485 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:14:43.445253 kubelet[2485]: I1213 14:14:43.445194 2485 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 14:14:43.445253 kubelet[2485]: I1213 14:14:43.445246 2485 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:14:43.446502 kubelet[2485]: I1213 14:14:43.446461 2485 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 14:14:43.493830 kubelet[2485]: I1213 14:14:43.493791 2485 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:14:43.495154 kubelet[2485]: E1213 14:14:43.495097 2485 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.27.214:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.27.214:6443: connect: connection refused Dec 13 14:14:43.508426 kubelet[2485]: I1213 14:14:43.508383 2485 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 14:14:43.509185 kubelet[2485]: I1213 14:14:43.509155 2485 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:14:43.509500 kubelet[2485]: I1213 14:14:43.509467 2485 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 14:14:43.509673 kubelet[2485]: I1213 14:14:43.509512 2485 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 14:14:43.509673 kubelet[2485]: I1213 14:14:43.509534 2485 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 14:14:43.509795 kubelet[2485]: I1213 14:14:43.509720 2485 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:14:43.514477 kubelet[2485]: I1213 14:14:43.514443 2485 kubelet.go:396] "Attempting to sync node with API server" Dec 13 14:14:43.514589 kubelet[2485]: I1213 14:14:43.514489 2485 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:14:43.514589 kubelet[2485]: I1213 14:14:43.514532 2485 kubelet.go:312] "Adding apiserver pod source" Dec 13 14:14:43.514589 kubelet[2485]: I1213 14:14:43.514563 2485 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:14:43.516022 kubelet[2485]: I1213 14:14:43.515990 2485 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:14:43.516684 kubelet[2485]: I1213 14:14:43.516658 2485 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:14:43.518148 kubelet[2485]: W1213 14:14:43.518115 2485 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
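The container_manager_linux.go line above serializes the kubelet's node config, including the HardEvictionThresholds table, as one JSON blob. A small sketch that re-renders just the eviction rules from that dump (threshold values copied verbatim from the log line):

    import json

    # HardEvictionThresholds as logged by container_manager_linux.go above.
    thresholds = json.loads("""[
      {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}},
      {"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}},
      {"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15}},
      {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}}
    ]""")

    for t in thresholds:
        v = t["Value"]
        limit = v["Quantity"] if v["Quantity"] else f"{v['Percentage']:.0%}"
        print(f"evict pods when {t['Signal']} < {limit}")

This prints the kubelet defaults in readable form: evict when nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, or memory.available < 100Mi.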
Dec 13 14:14:43.519374 kubelet[2485]: I1213 14:14:43.519342 2485 server.go:1256] "Started kubelet" Dec 13 14:14:43.519750 kubelet[2485]: W1213 14:14:43.519688 2485 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.27.214:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-214&limit=500&resourceVersion=0": dial tcp 172.31.27.214:6443: connect: connection refused Dec 13 14:14:43.519923 kubelet[2485]: E1213 14:14:43.519902 2485 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.27.214:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-214&limit=500&resourceVersion=0": dial tcp 172.31.27.214:6443: connect: connection refused Dec 13 14:14:43.520207 kubelet[2485]: W1213 14:14:43.520158 2485 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.27.214:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.27.214:6443: connect: connection refused Dec 13 14:14:43.520359 kubelet[2485]: E1213 14:14:43.520337 2485 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.27.214:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.27.214:6443: connect: connection refused Dec 13 14:14:43.535218 kubelet[2485]: I1213 14:14:43.535169 2485 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:14:43.535717 kubelet[2485]: I1213 14:14:43.535676 2485 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 14:14:43.541368 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Dec 13 14:14:43.548649 kubelet[2485]: I1213 14:14:43.548605 2485 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 14:14:43.550150 kubelet[2485]: E1213 14:14:43.550118 2485 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.27.214:6443/api/v1/namespaces/default/events\": dial tcp 172.31.27.214:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-27-214.1810c21c576d7b22 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-27-214,UID:ip-172-31-27-214,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-27-214,},FirstTimestamp:2024-12-13 14:14:43.51930653 +0000 UTC m=+1.400285216,LastTimestamp:2024-12-13 14:14:43.51930653 +0000 UTC m=+1.400285216,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-27-214,}" Dec 13 14:14:43.552900 kubelet[2485]: I1213 14:14:43.552865 2485 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:14:43.554523 kubelet[2485]: I1213 14:14:43.554485 2485 server.go:461] "Adding debug handlers to kubelet server" Dec 13 14:14:43.555764 kubelet[2485]: I1213 14:14:43.555711 2485 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 14:14:43.559237 kubelet[2485]: E1213 14:14:43.559186 2485 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 14:14:43.566871 kubelet[2485]: I1213 14:14:43.566830 2485 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:14:43.567273 kubelet[2485]: I1213 14:14:43.567239 2485 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 14:14:43.568295 kubelet[2485]: W1213 14:14:43.568237 2485 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.27.214:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.27.214:6443: connect: connection refused Dec 13 14:14:43.568490 kubelet[2485]: E1213 14:14:43.568464 2485 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.27.214:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.27.214:6443: connect: connection refused Dec 13 14:14:43.571155 kubelet[2485]: I1213 14:14:43.571008 2485 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 14:14:43.572145 kubelet[2485]: E1213 14:14:43.572111 2485 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.214:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-214?timeout=10s\": dial tcp 172.31.27.214:6443: connect: connection refused" interval="200ms" Dec 13 14:14:43.574031 kubelet[2485]: I1213 14:14:43.573997 2485 factory.go:221] Registration of the containerd container factory successfully Dec 13 14:14:43.575979 kubelet[2485]: I1213 14:14:43.575927 2485 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 14:14:43.604716 kubelet[2485]: I1213 14:14:43.604682 2485 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 14:14:43.611930 kubelet[2485]: I1213 14:14:43.611894 2485 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 14:14:43.612149 kubelet[2485]: I1213 14:14:43.612128 2485 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 14:14:43.612299 kubelet[2485]: I1213 14:14:43.612278 2485 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 14:14:43.612534 kubelet[2485]: E1213 14:14:43.612511 2485 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 14:14:43.622456 kubelet[2485]: W1213 14:14:43.622380 2485 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.27.214:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.27.214:6443: connect: connection refused Dec 13 14:14:43.622635 kubelet[2485]: E1213 14:14:43.622470 2485 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.27.214:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.27.214:6443: connect: connection refused Dec 13 14:14:43.634956 kubelet[2485]: I1213 14:14:43.634918 2485 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 14:14:43.634956 kubelet[2485]: I1213 14:14:43.634955 2485 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 14:14:43.635221 kubelet[2485]: I1213 14:14:43.634989 2485 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:14:43.638522 kubelet[2485]: I1213 14:14:43.638481 2485 policy_none.go:49] "None policy: Start" Dec 13 14:14:43.639787 kubelet[2485]: I1213 14:14:43.639751 2485 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 14:14:43.639949 kubelet[2485]: I1213 14:14:43.639822 2485 state_mem.go:35] "Initializing new in-memory state store" Dec 13 14:14:43.649725 kubelet[2485]: I1213 14:14:43.649665 2485 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 14:14:43.650060 kubelet[2485]: I1213 14:14:43.650027 2485 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 14:14:43.656785 kubelet[2485]: E1213 14:14:43.656677 2485 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-27-214\" not found" Dec 13 14:14:43.660112 kubelet[2485]: I1213 14:14:43.660080 2485 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-27-214" Dec 13 14:14:43.660824 kubelet[2485]: E1213 14:14:43.660799 2485 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.27.214:6443/api/v1/nodes\": dial tcp 172.31.27.214:6443: connect: connection refused" node="ip-172-31-27-214" Dec 13 14:14:43.715828 kubelet[2485]: I1213 14:14:43.713234 2485 topology_manager.go:215] "Topology Admit Handler" podUID="bad8bdd6948cdc270b35915be05ccc8c" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-27-214" Dec 13 14:14:43.716285 kubelet[2485]: I1213 14:14:43.716242 2485 topology_manager.go:215] "Topology Admit Handler" podUID="65c74af7e8e6e6623010e0a3542bbced" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-27-214" Dec 13 14:14:43.718294 kubelet[2485]: I1213 14:14:43.718246 2485 topology_manager.go:215] "Topology Admit Handler" podUID="c5f493ef2396b69800095f8ce4d9885e" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-27-214" Dec 13 14:14:43.773726 kubelet[2485]: E1213 
14:14:43.773683 2485 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.214:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-214?timeout=10s\": dial tcp 172.31.27.214:6443: connect: connection refused" interval="400ms" Dec 13 14:14:43.777881 kubelet[2485]: I1213 14:14:43.777796 2485 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bad8bdd6948cdc270b35915be05ccc8c-ca-certs\") pod \"kube-controller-manager-ip-172-31-27-214\" (UID: \"bad8bdd6948cdc270b35915be05ccc8c\") " pod="kube-system/kube-controller-manager-ip-172-31-27-214" Dec 13 14:14:43.777881 kubelet[2485]: I1213 14:14:43.777863 2485 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/bad8bdd6948cdc270b35915be05ccc8c-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-27-214\" (UID: \"bad8bdd6948cdc270b35915be05ccc8c\") " pod="kube-system/kube-controller-manager-ip-172-31-27-214" Dec 13 14:14:43.777881 kubelet[2485]: I1213 14:14:43.777913 2485 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bad8bdd6948cdc270b35915be05ccc8c-k8s-certs\") pod \"kube-controller-manager-ip-172-31-27-214\" (UID: \"bad8bdd6948cdc270b35915be05ccc8c\") " pod="kube-system/kube-controller-manager-ip-172-31-27-214" Dec 13 14:14:43.778385 kubelet[2485]: I1213 14:14:43.777965 2485 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bad8bdd6948cdc270b35915be05ccc8c-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-27-214\" (UID: \"bad8bdd6948cdc270b35915be05ccc8c\") " pod="kube-system/kube-controller-manager-ip-172-31-27-214" Dec 13 14:14:43.778385 kubelet[2485]: I1213 14:14:43.778011 2485 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c5f493ef2396b69800095f8ce4d9885e-ca-certs\") pod \"kube-apiserver-ip-172-31-27-214\" (UID: \"c5f493ef2396b69800095f8ce4d9885e\") " pod="kube-system/kube-apiserver-ip-172-31-27-214" Dec 13 14:14:43.778385 kubelet[2485]: I1213 14:14:43.778057 2485 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c5f493ef2396b69800095f8ce4d9885e-k8s-certs\") pod \"kube-apiserver-ip-172-31-27-214\" (UID: \"c5f493ef2396b69800095f8ce4d9885e\") " pod="kube-system/kube-apiserver-ip-172-31-27-214" Dec 13 14:14:43.778385 kubelet[2485]: I1213 14:14:43.778146 2485 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c5f493ef2396b69800095f8ce4d9885e-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-27-214\" (UID: \"c5f493ef2396b69800095f8ce4d9885e\") " pod="kube-system/kube-apiserver-ip-172-31-27-214" Dec 13 14:14:43.778385 kubelet[2485]: I1213 14:14:43.778206 2485 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bad8bdd6948cdc270b35915be05ccc8c-kubeconfig\") pod \"kube-controller-manager-ip-172-31-27-214\" (UID: \"bad8bdd6948cdc270b35915be05ccc8c\") " 
pod="kube-system/kube-controller-manager-ip-172-31-27-214" Dec 13 14:14:43.778724 kubelet[2485]: I1213 14:14:43.778251 2485 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/65c74af7e8e6e6623010e0a3542bbced-kubeconfig\") pod \"kube-scheduler-ip-172-31-27-214\" (UID: \"65c74af7e8e6e6623010e0a3542bbced\") " pod="kube-system/kube-scheduler-ip-172-31-27-214" Dec 13 14:14:43.863978 kubelet[2485]: I1213 14:14:43.863946 2485 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-27-214" Dec 13 14:14:43.864623 kubelet[2485]: E1213 14:14:43.864595 2485 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.27.214:6443/api/v1/nodes\": dial tcp 172.31.27.214:6443: connect: connection refused" node="ip-172-31-27-214" Dec 13 14:14:44.025508 env[1849]: time="2024-12-13T14:14:44.025330910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-27-214,Uid:bad8bdd6948cdc270b35915be05ccc8c,Namespace:kube-system,Attempt:0,}" Dec 13 14:14:44.030674 env[1849]: time="2024-12-13T14:14:44.030117192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-27-214,Uid:65c74af7e8e6e6623010e0a3542bbced,Namespace:kube-system,Attempt:0,}" Dec 13 14:14:44.033825 env[1849]: time="2024-12-13T14:14:44.033761600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-27-214,Uid:c5f493ef2396b69800095f8ce4d9885e,Namespace:kube-system,Attempt:0,}" Dec 13 14:14:44.174252 kubelet[2485]: E1213 14:14:44.174208 2485 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.214:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-214?timeout=10s\": dial tcp 172.31.27.214:6443: connect: connection refused" interval="800ms" Dec 13 14:14:44.272848 kubelet[2485]: I1213 14:14:44.272808 2485 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-27-214" Dec 13 14:14:44.273536 kubelet[2485]: E1213 14:14:44.273508 2485 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.27.214:6443/api/v1/nodes\": dial tcp 172.31.27.214:6443: connect: connection refused" node="ip-172-31-27-214" Dec 13 14:14:44.415327 kubelet[2485]: W1213 14:14:44.415234 2485 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.27.214:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.27.214:6443: connect: connection refused Dec 13 14:14:44.415327 kubelet[2485]: E1213 14:14:44.415333 2485 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.27.214:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.27.214:6443: connect: connection refused Dec 13 14:14:44.424674 kubelet[2485]: W1213 14:14:44.424596 2485 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.27.214:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.27.214:6443: connect: connection refused Dec 13 14:14:44.424776 kubelet[2485]: E1213 14:14:44.424678 2485 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
"https://172.31.27.214:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.27.214:6443: connect: connection refused Dec 13 14:14:44.558628 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount256046152.mount: Deactivated successfully. Dec 13 14:14:44.574109 env[1849]: time="2024-12-13T14:14:44.574028524Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:44.583465 env[1849]: time="2024-12-13T14:14:44.583402872Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:44.585810 env[1849]: time="2024-12-13T14:14:44.585734189Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:44.588650 env[1849]: time="2024-12-13T14:14:44.588586307Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:44.599020 env[1849]: time="2024-12-13T14:14:44.598968790Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:44.601040 env[1849]: time="2024-12-13T14:14:44.600994874Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:44.603496 env[1849]: time="2024-12-13T14:14:44.603451195Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:44.607609 env[1849]: time="2024-12-13T14:14:44.607562884Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:44.613193 env[1849]: time="2024-12-13T14:14:44.613136620Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:44.617233 env[1849]: time="2024-12-13T14:14:44.617162473Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:44.623849 env[1849]: time="2024-12-13T14:14:44.623765559Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:44.642253 env[1849]: time="2024-12-13T14:14:44.642195763Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:44.680585 env[1849]: time="2024-12-13T14:14:44.680353217Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:14:44.680585 env[1849]: time="2024-12-13T14:14:44.680446817Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:14:44.682003 env[1849]: time="2024-12-13T14:14:44.680474309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:14:44.682003 env[1849]: time="2024-12-13T14:14:44.681184458Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e5ecbffa632b4a21cc96e913b155de926e9b485651ab9206c214cd1152ede1a3 pid=2523 runtime=io.containerd.runc.v2 Dec 13 14:14:44.706341 env[1849]: time="2024-12-13T14:14:44.706215672Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:14:44.706479 env[1849]: time="2024-12-13T14:14:44.706367280Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:14:44.706571 env[1849]: time="2024-12-13T14:14:44.706457377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:14:44.707727 env[1849]: time="2024-12-13T14:14:44.707614023Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:14:44.707881 env[1849]: time="2024-12-13T14:14:44.707746095Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:14:44.707881 env[1849]: time="2024-12-13T14:14:44.707808148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:14:44.708192 env[1849]: time="2024-12-13T14:14:44.708126820Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a1992238b9401c6a8b98cbb18db765a2097d3e9227d4c88a1d9566e524bdc65a pid=2541 runtime=io.containerd.runc.v2 Dec 13 14:14:44.708613 env[1849]: time="2024-12-13T14:14:44.708486113Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5db3b65b73b51b39008232a89f3c3cb3ad74e9800a78be46d277f702e62f0704 pid=2559 runtime=io.containerd.runc.v2 Dec 13 14:14:44.867909 env[1849]: time="2024-12-13T14:14:44.867832847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-27-214,Uid:c5f493ef2396b69800095f8ce4d9885e,Namespace:kube-system,Attempt:0,} returns sandbox id \"e5ecbffa632b4a21cc96e913b155de926e9b485651ab9206c214cd1152ede1a3\"" Dec 13 14:14:44.874680 env[1849]: time="2024-12-13T14:14:44.874609598Z" level=info msg="CreateContainer within sandbox \"e5ecbffa632b4a21cc96e913b155de926e9b485651ab9206c214cd1152ede1a3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 14:14:44.886499 env[1849]: time="2024-12-13T14:14:44.886442919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-27-214,Uid:65c74af7e8e6e6623010e0a3542bbced,Namespace:kube-system,Attempt:0,} returns sandbox id \"a1992238b9401c6a8b98cbb18db765a2097d3e9227d4c88a1d9566e524bdc65a\"" Dec 13 14:14:44.888785 kubelet[2485]: W1213 14:14:44.888700 2485 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.27.214:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-214&limit=500&resourceVersion=0": dial tcp 172.31.27.214:6443: connect: connection refused Dec 13 14:14:44.890016 kubelet[2485]: E1213 14:14:44.888796 2485 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.27.214:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-214&limit=500&resourceVersion=0": dial tcp 172.31.27.214:6443: connect: connection refused Dec 13 14:14:44.895698 env[1849]: time="2024-12-13T14:14:44.895643039Z" level=info msg="CreateContainer within sandbox \"a1992238b9401c6a8b98cbb18db765a2097d3e9227d4c88a1d9566e524bdc65a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 14:14:44.917878 env[1849]: time="2024-12-13T14:14:44.917822531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-27-214,Uid:bad8bdd6948cdc270b35915be05ccc8c,Namespace:kube-system,Attempt:0,} returns sandbox id \"5db3b65b73b51b39008232a89f3c3cb3ad74e9800a78be46d277f702e62f0704\"" Dec 13 14:14:44.924429 env[1849]: time="2024-12-13T14:14:44.924365773Z" level=info msg="CreateContainer within sandbox \"e5ecbffa632b4a21cc96e913b155de926e9b485651ab9206c214cd1152ede1a3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f57ff375b94706e0f87fd07f5240f5c1c164a053261546304820a60b27a9d718\"" Dec 13 14:14:44.924977 env[1849]: time="2024-12-13T14:14:44.924464305Z" level=info msg="CreateContainer within sandbox \"5db3b65b73b51b39008232a89f3c3cb3ad74e9800a78be46d277f702e62f0704\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 14:14:44.926000 env[1849]: time="2024-12-13T14:14:44.925938388Z" level=info msg="StartContainer for 
\"f57ff375b94706e0f87fd07f5240f5c1c164a053261546304820a60b27a9d718\"" Dec 13 14:14:44.933206 env[1849]: time="2024-12-13T14:14:44.933033703Z" level=info msg="CreateContainer within sandbox \"a1992238b9401c6a8b98cbb18db765a2097d3e9227d4c88a1d9566e524bdc65a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f1872f4034d3540370d1f356c5185ab7ee2726cea610e8ff326e29d8448813a9\"" Dec 13 14:14:44.935315 env[1849]: time="2024-12-13T14:14:44.935239980Z" level=info msg="StartContainer for \"f1872f4034d3540370d1f356c5185ab7ee2726cea610e8ff326e29d8448813a9\"" Dec 13 14:14:44.963785 env[1849]: time="2024-12-13T14:14:44.963711577Z" level=info msg="CreateContainer within sandbox \"5db3b65b73b51b39008232a89f3c3cb3ad74e9800a78be46d277f702e62f0704\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"855152d07892a305e94a98566c47a0529caed0e9495f243bee320692093cb997\"" Dec 13 14:14:44.964693 env[1849]: time="2024-12-13T14:14:44.964649979Z" level=info msg="StartContainer for \"855152d07892a305e94a98566c47a0529caed0e9495f243bee320692093cb997\"" Dec 13 14:14:44.978147 kubelet[2485]: E1213 14:14:44.975175 2485 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.214:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-214?timeout=10s\": dial tcp 172.31.27.214:6443: connect: connection refused" interval="1.6s" Dec 13 14:14:45.078657 kubelet[2485]: I1213 14:14:45.078618 2485 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-27-214" Dec 13 14:14:45.079387 kubelet[2485]: E1213 14:14:45.079354 2485 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.27.214:6443/api/v1/nodes\": dial tcp 172.31.27.214:6443: connect: connection refused" node="ip-172-31-27-214" Dec 13 14:14:45.099863 kubelet[2485]: W1213 14:14:45.099701 2485 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.27.214:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.27.214:6443: connect: connection refused Dec 13 14:14:45.099863 kubelet[2485]: E1213 14:14:45.099785 2485 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.27.214:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.27.214:6443: connect: connection refused Dec 13 14:14:45.137060 env[1849]: time="2024-12-13T14:14:45.135783997Z" level=info msg="StartContainer for \"f57ff375b94706e0f87fd07f5240f5c1c164a053261546304820a60b27a9d718\" returns successfully" Dec 13 14:14:45.178351 env[1849]: time="2024-12-13T14:14:45.178291874Z" level=info msg="StartContainer for \"f1872f4034d3540370d1f356c5185ab7ee2726cea610e8ff326e29d8448813a9\" returns successfully" Dec 13 14:14:45.216032 env[1849]: time="2024-12-13T14:14:45.215889114Z" level=info msg="StartContainer for \"855152d07892a305e94a98566c47a0529caed0e9495f243bee320692093cb997\" returns successfully" Dec 13 14:14:46.681873 kubelet[2485]: I1213 14:14:46.681839 2485 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-27-214" Dec 13 14:14:49.024934 update_engine[1834]: I1213 14:14:49.024134 1834 update_attempter.cc:509] Updating boot flags... 
Dec 13 14:14:49.225122 kubelet[2485]: E1213 14:14:49.221239 2485 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-27-214\" not found" node="ip-172-31-27-214" Dec 13 14:14:49.296816 kubelet[2485]: I1213 14:14:49.296674 2485 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-27-214" Dec 13 14:14:49.524723 kubelet[2485]: I1213 14:14:49.522181 2485 apiserver.go:52] "Watching apiserver" Dec 13 14:14:49.558359 kubelet[2485]: I1213 14:14:49.558226 2485 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 14:14:52.824766 systemd[1]: Reloading. Dec 13 14:14:52.942585 /usr/lib/systemd/system-generators/torcx-generator[2955]: time="2024-12-13T14:14:52Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:14:52.943275 /usr/lib/systemd/system-generators/torcx-generator[2955]: time="2024-12-13T14:14:52Z" level=info msg="torcx already run" Dec 13 14:14:53.151297 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:14:53.151334 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:14:53.198402 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:14:53.416384 systemd[1]: Stopping kubelet.service... Dec 13 14:14:53.429786 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 14:14:53.430434 systemd[1]: Stopped kubelet.service. Dec 13 14:14:53.434234 systemd[1]: Starting kubelet.service... Dec 13 14:14:53.726334 systemd[1]: Started kubelet.service. Dec 13 14:14:53.857917 sudo[3034]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 14:14:53.859258 sudo[3034]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Dec 13 14:14:53.870182 kubelet[3023]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:14:53.870182 kubelet[3023]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 14:14:53.870182 kubelet[3023]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
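The three deprecation warnings above are kubelet pointing at its file-based configuration: flags such as --container-runtime-endpoint and --volume-plugin-dir should move into the KubeletConfiguration file passed via --config. A sketch of what such a file could contain, decoded with kubelet's v1beta1 config types; the endpoint and directory values are illustrative assumptions, not values read from this host. (The systemd warnings above carry the same message for unit files: CPUShares= and MemoryLimit= give way to CPUWeight= and MemoryMax=.)

package main

import (
	"fmt"

	kubeletconfig "k8s.io/kubelet/config/v1beta1"
	"sigs.k8s.io/yaml"
)

// Hypothetical config content; field names follow the v1beta1 API.
const conf = `
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
volumePluginDir: /var/lib/kubelet/volumeplugins
`

func main() {
	var kc kubeletconfig.KubeletConfiguration
	if err := yaml.Unmarshal([]byte(conf), &kc); err != nil {
		panic(err)
	}
	fmt.Println("runtime endpoint:", kc.ContainerRuntimeEndpoint)
	fmt.Println("volume plugin dir:", kc.VolumePluginDir)
}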
Dec 13 14:14:53.870852 kubelet[3023]: I1213 14:14:53.870291 3023 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:14:53.879286 kubelet[3023]: I1213 14:14:53.879233 3023 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 14:14:53.879286 kubelet[3023]: I1213 14:14:53.879280 3023 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:14:53.879701 kubelet[3023]: I1213 14:14:53.879669 3023 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 14:14:53.882988 kubelet[3023]: I1213 14:14:53.882942 3023 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 14:14:53.886190 kubelet[3023]: I1213 14:14:53.886137 3023 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:14:53.897103 kubelet[3023]: I1213 14:14:53.897043 3023 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 14:14:53.897964 kubelet[3023]: I1213 14:14:53.897926 3023 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:14:53.898259 kubelet[3023]: I1213 14:14:53.898224 3023 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 14:14:53.898433 kubelet[3023]: I1213 14:14:53.898268 3023 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 14:14:53.898433 kubelet[3023]: I1213 14:14:53.898291 3023 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 14:14:53.898433 kubelet[3023]: I1213 14:14:53.898346 3023 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:14:53.898639 kubelet[3023]: I1213 14:14:53.898515 3023 kubelet.go:396] "Attempting to sync node with API server" Dec 13 14:14:53.899987 kubelet[3023]: I1213 14:14:53.899939 3023 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:14:53.900145 kubelet[3023]: I1213 14:14:53.900021 3023 kubelet.go:312] "Adding apiserver pod source" Dec 13 
14:14:53.900145 kubelet[3023]: I1213 14:14:53.900057 3023 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:14:53.904047 kubelet[3023]: I1213 14:14:53.903985 3023 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:14:53.905444 kubelet[3023]: I1213 14:14:53.905401 3023 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:14:53.910114 kubelet[3023]: I1213 14:14:53.910039 3023 server.go:1256] "Started kubelet" Dec 13 14:14:53.927643 kubelet[3023]: I1213 14:14:53.927596 3023 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:14:53.929403 kubelet[3023]: I1213 14:14:53.929343 3023 server.go:461] "Adding debug handlers to kubelet server" Dec 13 14:14:53.937727 kubelet[3023]: I1213 14:14:53.937364 3023 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:14:53.937727 kubelet[3023]: I1213 14:14:53.937717 3023 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 14:14:53.957117 kubelet[3023]: I1213 14:14:53.952531 3023 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 14:14:53.961125 kubelet[3023]: I1213 14:14:53.961050 3023 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 14:14:53.969143 kubelet[3023]: I1213 14:14:53.969054 3023 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 14:14:53.971360 kubelet[3023]: I1213 14:14:53.971296 3023 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 14:14:53.992464 kubelet[3023]: I1213 14:14:53.992360 3023 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:14:53.994472 kubelet[3023]: I1213 14:14:53.994420 3023 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 14:14:54.019511 kubelet[3023]: E1213 14:14:54.019357 3023 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 14:14:54.031603 kubelet[3023]: I1213 14:14:54.031500 3023 factory.go:221] Registration of the containerd container factory successfully Dec 13 14:14:54.057174 kubelet[3023]: I1213 14:14:54.057125 3023 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 14:14:54.074615 kubelet[3023]: I1213 14:14:54.074558 3023 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 14:14:54.074615 kubelet[3023]: I1213 14:14:54.074605 3023 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 14:14:54.074892 kubelet[3023]: I1213 14:14:54.074638 3023 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 14:14:54.074892 kubelet[3023]: E1213 14:14:54.074721 3023 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 14:14:54.080953 kubelet[3023]: I1213 14:14:54.080901 3023 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-27-214" Dec 13 14:14:54.123809 kubelet[3023]: I1213 14:14:54.123759 3023 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-27-214" Dec 13 14:14:54.124053 kubelet[3023]: I1213 14:14:54.123932 3023 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-27-214" Dec 13 14:14:54.174893 kubelet[3023]: E1213 14:14:54.174856 3023 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 14:14:54.270849 kubelet[3023]: I1213 14:14:54.270112 3023 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 14:14:54.271156 kubelet[3023]: I1213 14:14:54.271056 3023 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 14:14:54.271282 kubelet[3023]: I1213 14:14:54.271262 3023 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:14:54.271630 kubelet[3023]: I1213 14:14:54.271608 3023 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 14:14:54.271779 kubelet[3023]: I1213 14:14:54.271758 3023 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 14:14:54.271892 kubelet[3023]: I1213 14:14:54.271872 3023 policy_none.go:49] "None policy: Start" Dec 13 14:14:54.273324 kubelet[3023]: I1213 14:14:54.273292 3023 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 14:14:54.273548 kubelet[3023]: I1213 14:14:54.273525 3023 state_mem.go:35] "Initializing new in-memory state store" Dec 13 14:14:54.273934 kubelet[3023]: I1213 14:14:54.273911 3023 state_mem.go:75] "Updated machine memory state" Dec 13 14:14:54.276788 kubelet[3023]: I1213 14:14:54.276754 3023 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 14:14:54.283164 kubelet[3023]: I1213 14:14:54.282680 3023 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 14:14:54.375411 kubelet[3023]: I1213 14:14:54.375347 3023 topology_manager.go:215] "Topology Admit Handler" podUID="65c74af7e8e6e6623010e0a3542bbced" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-27-214" Dec 13 14:14:54.375577 kubelet[3023]: I1213 14:14:54.375494 3023 topology_manager.go:215] "Topology Admit Handler" podUID="c5f493ef2396b69800095f8ce4d9885e" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-27-214" Dec 13 14:14:54.375667 kubelet[3023]: I1213 14:14:54.375590 3023 topology_manager.go:215] "Topology Admit Handler" podUID="bad8bdd6948cdc270b35915be05ccc8c" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-27-214" Dec 13 14:14:54.393140 kubelet[3023]: E1213 14:14:54.393102 3023 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-27-214\" already exists" pod="kube-system/kube-scheduler-ip-172-31-27-214" Dec 13 14:14:54.482851 kubelet[3023]: I1213 14:14:54.482811 3023 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/bad8bdd6948cdc270b35915be05ccc8c-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-27-214\" (UID: \"bad8bdd6948cdc270b35915be05ccc8c\") " pod="kube-system/kube-controller-manager-ip-172-31-27-214" Dec 13 14:14:54.483256 kubelet[3023]: I1213 14:14:54.483232 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bad8bdd6948cdc270b35915be05ccc8c-kubeconfig\") pod \"kube-controller-manager-ip-172-31-27-214\" (UID: \"bad8bdd6948cdc270b35915be05ccc8c\") " pod="kube-system/kube-controller-manager-ip-172-31-27-214" Dec 13 14:14:54.483458 kubelet[3023]: I1213 14:14:54.483403 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bad8bdd6948cdc270b35915be05ccc8c-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-27-214\" (UID: \"bad8bdd6948cdc270b35915be05ccc8c\") " pod="kube-system/kube-controller-manager-ip-172-31-27-214" Dec 13 14:14:54.483638 kubelet[3023]: I1213 14:14:54.483616 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/65c74af7e8e6e6623010e0a3542bbced-kubeconfig\") pod \"kube-scheduler-ip-172-31-27-214\" (UID: \"65c74af7e8e6e6623010e0a3542bbced\") " pod="kube-system/kube-scheduler-ip-172-31-27-214" Dec 13 14:14:54.483798 kubelet[3023]: I1213 14:14:54.483777 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c5f493ef2396b69800095f8ce4d9885e-ca-certs\") pod \"kube-apiserver-ip-172-31-27-214\" (UID: \"c5f493ef2396b69800095f8ce4d9885e\") " pod="kube-system/kube-apiserver-ip-172-31-27-214" Dec 13 14:14:54.483950 kubelet[3023]: I1213 14:14:54.483929 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c5f493ef2396b69800095f8ce4d9885e-k8s-certs\") pod \"kube-apiserver-ip-172-31-27-214\" (UID: \"c5f493ef2396b69800095f8ce4d9885e\") " pod="kube-system/kube-apiserver-ip-172-31-27-214" Dec 13 14:14:54.484126 kubelet[3023]: I1213 14:14:54.484104 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c5f493ef2396b69800095f8ce4d9885e-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-27-214\" (UID: \"c5f493ef2396b69800095f8ce4d9885e\") " pod="kube-system/kube-apiserver-ip-172-31-27-214" Dec 13 14:14:54.484305 kubelet[3023]: I1213 14:14:54.484286 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bad8bdd6948cdc270b35915be05ccc8c-ca-certs\") pod \"kube-controller-manager-ip-172-31-27-214\" (UID: \"bad8bdd6948cdc270b35915be05ccc8c\") " pod="kube-system/kube-controller-manager-ip-172-31-27-214" Dec 13 14:14:54.484465 kubelet[3023]: I1213 14:14:54.484443 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bad8bdd6948cdc270b35915be05ccc8c-k8s-certs\") pod \"kube-controller-manager-ip-172-31-27-214\" (UID: 
\"bad8bdd6948cdc270b35915be05ccc8c\") " pod="kube-system/kube-controller-manager-ip-172-31-27-214" Dec 13 14:14:54.932142 kubelet[3023]: I1213 14:14:54.932095 3023 apiserver.go:52] "Watching apiserver" Dec 13 14:14:54.968286 sudo[3034]: pam_unix(sudo:session): session closed for user root Dec 13 14:14:54.970955 kubelet[3023]: I1213 14:14:54.970912 3023 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 14:14:55.209572 kubelet[3023]: I1213 14:14:55.209418 3023 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-27-214" podStartSLOduration=1.209309141 podStartE2EDuration="1.209309141s" podCreationTimestamp="2024-12-13 14:14:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:14:55.192958764 +0000 UTC m=+1.443663616" watchObservedRunningTime="2024-12-13 14:14:55.209309141 +0000 UTC m=+1.460013981" Dec 13 14:14:55.225863 kubelet[3023]: I1213 14:14:55.225809 3023 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-27-214" podStartSLOduration=1.225728699 podStartE2EDuration="1.225728699s" podCreationTimestamp="2024-12-13 14:14:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:14:55.209308577 +0000 UTC m=+1.460013489" watchObservedRunningTime="2024-12-13 14:14:55.225728699 +0000 UTC m=+1.476433539" Dec 13 14:14:55.226389 kubelet[3023]: I1213 14:14:55.226359 3023 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-27-214" podStartSLOduration=5.226286711 podStartE2EDuration="5.226286711s" podCreationTimestamp="2024-12-13 14:14:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:14:55.224763694 +0000 UTC m=+1.475468534" watchObservedRunningTime="2024-12-13 14:14:55.226286711 +0000 UTC m=+1.476991551" Dec 13 14:14:57.321381 sudo[2105]: pam_unix(sudo:session): session closed for user root Dec 13 14:14:57.345448 sshd[2101]: pam_unix(sshd:session): session closed for user core Dec 13 14:14:57.351129 systemd[1]: sshd@4-172.31.27.214:22-139.178.89.65:57142.service: Deactivated successfully. Dec 13 14:14:57.352133 systemd-logind[1832]: Session 5 logged out. Waiting for processes to exit. Dec 13 14:14:57.354840 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 14:14:57.357640 systemd-logind[1832]: Removed session 5. Dec 13 14:15:02.018627 amazon-ssm-agent[1810]: 2024-12-13 14:15:02 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Dec 13 14:15:05.883699 kubelet[3023]: I1213 14:15:05.883651 3023 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 14:15:05.884510 env[1849]: time="2024-12-13T14:15:05.884447352Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 13 14:15:05.885350 kubelet[3023]: I1213 14:15:05.885315 3023 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 14:15:06.840764 kubelet[3023]: I1213 14:15:06.840706 3023 topology_manager.go:215] "Topology Admit Handler" podUID="7eb50ffe-24d3-4312-9a5a-85c469d740d8" podNamespace="kube-system" podName="kube-proxy-q49qs" Dec 13 14:15:06.857934 kubelet[3023]: I1213 14:15:06.857893 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7eb50ffe-24d3-4312-9a5a-85c469d740d8-lib-modules\") pod \"kube-proxy-q49qs\" (UID: \"7eb50ffe-24d3-4312-9a5a-85c469d740d8\") " pod="kube-system/kube-proxy-q49qs" Dec 13 14:15:06.858259 kubelet[3023]: I1213 14:15:06.858224 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7eb50ffe-24d3-4312-9a5a-85c469d740d8-kube-proxy\") pod \"kube-proxy-q49qs\" (UID: \"7eb50ffe-24d3-4312-9a5a-85c469d740d8\") " pod="kube-system/kube-proxy-q49qs" Dec 13 14:15:06.858481 kubelet[3023]: I1213 14:15:06.858443 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7eb50ffe-24d3-4312-9a5a-85c469d740d8-xtables-lock\") pod \"kube-proxy-q49qs\" (UID: \"7eb50ffe-24d3-4312-9a5a-85c469d740d8\") " pod="kube-system/kube-proxy-q49qs" Dec 13 14:15:06.858721 kubelet[3023]: I1213 14:15:06.858693 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkklg\" (UniqueName: \"kubernetes.io/projected/7eb50ffe-24d3-4312-9a5a-85c469d740d8-kube-api-access-wkklg\") pod \"kube-proxy-q49qs\" (UID: \"7eb50ffe-24d3-4312-9a5a-85c469d740d8\") " pod="kube-system/kube-proxy-q49qs" Dec 13 14:15:06.862010 kubelet[3023]: I1213 14:15:06.861953 3023 topology_manager.go:215] "Topology Admit Handler" podUID="d8cae5bd-3f45-41e1-bc85-fad499e98dc9" podNamespace="kube-system" podName="cilium-t4vnt" Dec 13 14:15:06.959927 kubelet[3023]: I1213 14:15:06.959869 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d8cae5bd-3f45-41e1-bc85-fad499e98dc9-hostproc\") pod \"cilium-t4vnt\" (UID: \"d8cae5bd-3f45-41e1-bc85-fad499e98dc9\") " pod="kube-system/cilium-t4vnt" Dec 13 14:15:06.960561 kubelet[3023]: I1213 14:15:06.959953 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d8cae5bd-3f45-41e1-bc85-fad499e98dc9-etc-cni-netd\") pod \"cilium-t4vnt\" (UID: \"d8cae5bd-3f45-41e1-bc85-fad499e98dc9\") " pod="kube-system/cilium-t4vnt" Dec 13 14:15:06.960561 kubelet[3023]: I1213 14:15:06.960002 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d8cae5bd-3f45-41e1-bc85-fad499e98dc9-lib-modules\") pod \"cilium-t4vnt\" (UID: \"d8cae5bd-3f45-41e1-bc85-fad499e98dc9\") " pod="kube-system/cilium-t4vnt" Dec 13 14:15:06.960561 kubelet[3023]: I1213 14:15:06.960047 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d8cae5bd-3f45-41e1-bc85-fad499e98dc9-xtables-lock\") pod \"cilium-t4vnt\" (UID: \"d8cae5bd-3f45-41e1-bc85-fad499e98dc9\") " 
pod="kube-system/cilium-t4vnt" Dec 13 14:15:06.960561 kubelet[3023]: I1213 14:15:06.960125 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d8cae5bd-3f45-41e1-bc85-fad499e98dc9-clustermesh-secrets\") pod \"cilium-t4vnt\" (UID: \"d8cae5bd-3f45-41e1-bc85-fad499e98dc9\") " pod="kube-system/cilium-t4vnt" Dec 13 14:15:06.960561 kubelet[3023]: I1213 14:15:06.960215 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d8cae5bd-3f45-41e1-bc85-fad499e98dc9-cni-path\") pod \"cilium-t4vnt\" (UID: \"d8cae5bd-3f45-41e1-bc85-fad499e98dc9\") " pod="kube-system/cilium-t4vnt" Dec 13 14:15:06.960561 kubelet[3023]: I1213 14:15:06.960264 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d8cae5bd-3f45-41e1-bc85-fad499e98dc9-cilium-run\") pod \"cilium-t4vnt\" (UID: \"d8cae5bd-3f45-41e1-bc85-fad499e98dc9\") " pod="kube-system/cilium-t4vnt" Dec 13 14:15:06.960909 kubelet[3023]: I1213 14:15:06.960309 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d8cae5bd-3f45-41e1-bc85-fad499e98dc9-cilium-config-path\") pod \"cilium-t4vnt\" (UID: \"d8cae5bd-3f45-41e1-bc85-fad499e98dc9\") " pod="kube-system/cilium-t4vnt" Dec 13 14:15:06.960909 kubelet[3023]: I1213 14:15:06.960353 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d8cae5bd-3f45-41e1-bc85-fad499e98dc9-host-proc-sys-kernel\") pod \"cilium-t4vnt\" (UID: \"d8cae5bd-3f45-41e1-bc85-fad499e98dc9\") " pod="kube-system/cilium-t4vnt" Dec 13 14:15:06.960909 kubelet[3023]: I1213 14:15:06.960397 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snmsn\" (UniqueName: \"kubernetes.io/projected/d8cae5bd-3f45-41e1-bc85-fad499e98dc9-kube-api-access-snmsn\") pod \"cilium-t4vnt\" (UID: \"d8cae5bd-3f45-41e1-bc85-fad499e98dc9\") " pod="kube-system/cilium-t4vnt" Dec 13 14:15:06.960909 kubelet[3023]: I1213 14:15:06.960444 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d8cae5bd-3f45-41e1-bc85-fad499e98dc9-bpf-maps\") pod \"cilium-t4vnt\" (UID: \"d8cae5bd-3f45-41e1-bc85-fad499e98dc9\") " pod="kube-system/cilium-t4vnt" Dec 13 14:15:06.960909 kubelet[3023]: I1213 14:15:06.960488 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d8cae5bd-3f45-41e1-bc85-fad499e98dc9-cilium-cgroup\") pod \"cilium-t4vnt\" (UID: \"d8cae5bd-3f45-41e1-bc85-fad499e98dc9\") " pod="kube-system/cilium-t4vnt" Dec 13 14:15:06.960909 kubelet[3023]: I1213 14:15:06.960531 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d8cae5bd-3f45-41e1-bc85-fad499e98dc9-hubble-tls\") pod \"cilium-t4vnt\" (UID: \"d8cae5bd-3f45-41e1-bc85-fad499e98dc9\") " pod="kube-system/cilium-t4vnt" Dec 13 14:15:06.961305 kubelet[3023]: I1213 14:15:06.960583 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d8cae5bd-3f45-41e1-bc85-fad499e98dc9-host-proc-sys-net\") pod \"cilium-t4vnt\" (UID: \"d8cae5bd-3f45-41e1-bc85-fad499e98dc9\") " pod="kube-system/cilium-t4vnt" Dec 13 14:15:07.062269 kubelet[3023]: I1213 14:15:07.062211 3023 topology_manager.go:215] "Topology Admit Handler" podUID="83d721e2-4439-42e1-abfe-8f2d2ce74d4e" podNamespace="kube-system" podName="cilium-operator-5cc964979-zln6c" Dec 13 14:15:07.158517 env[1849]: time="2024-12-13T14:15:07.158363397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q49qs,Uid:7eb50ffe-24d3-4312-9a5a-85c469d740d8,Namespace:kube-system,Attempt:0,}" Dec 13 14:15:07.163883 kubelet[3023]: I1213 14:15:07.163835 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4gh7\" (UniqueName: \"kubernetes.io/projected/83d721e2-4439-42e1-abfe-8f2d2ce74d4e-kube-api-access-l4gh7\") pod \"cilium-operator-5cc964979-zln6c\" (UID: \"83d721e2-4439-42e1-abfe-8f2d2ce74d4e\") " pod="kube-system/cilium-operator-5cc964979-zln6c" Dec 13 14:15:07.164199 kubelet[3023]: I1213 14:15:07.164166 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/83d721e2-4439-42e1-abfe-8f2d2ce74d4e-cilium-config-path\") pod \"cilium-operator-5cc964979-zln6c\" (UID: \"83d721e2-4439-42e1-abfe-8f2d2ce74d4e\") " pod="kube-system/cilium-operator-5cc964979-zln6c" Dec 13 14:15:07.194462 env[1849]: time="2024-12-13T14:15:07.194398239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t4vnt,Uid:d8cae5bd-3f45-41e1-bc85-fad499e98dc9,Namespace:kube-system,Attempt:0,}" Dec 13 14:15:07.197196 env[1849]: time="2024-12-13T14:15:07.197044540Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:15:07.197196 env[1849]: time="2024-12-13T14:15:07.197157256Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:15:07.198138 env[1849]: time="2024-12-13T14:15:07.197568748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:15:07.198289 env[1849]: time="2024-12-13T14:15:07.198225329Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f395f774d566eb5bbf750fa8122fe8e7595ed50259fbc99e1d5bc988952de685 pid=3106 runtime=io.containerd.runc.v2 Dec 13 14:15:07.224415 env[1849]: time="2024-12-13T14:15:07.223982345Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:15:07.224415 env[1849]: time="2024-12-13T14:15:07.224248229Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:15:07.224415 env[1849]: time="2024-12-13T14:15:07.224351477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:15:07.229396 env[1849]: time="2024-12-13T14:15:07.229287464Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6af4e0fc49e6ef5a4f752d11eeece45f50785a1a1ec148698f4f342a854adf88 pid=3131 runtime=io.containerd.runc.v2 Dec 13 14:15:07.317696 env[1849]: time="2024-12-13T14:15:07.317613327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q49qs,Uid:7eb50ffe-24d3-4312-9a5a-85c469d740d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"f395f774d566eb5bbf750fa8122fe8e7595ed50259fbc99e1d5bc988952de685\"" Dec 13 14:15:07.324871 env[1849]: time="2024-12-13T14:15:07.324810199Z" level=info msg="CreateContainer within sandbox \"f395f774d566eb5bbf750fa8122fe8e7595ed50259fbc99e1d5bc988952de685\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 14:15:07.364628 env[1849]: time="2024-12-13T14:15:07.364547306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t4vnt,Uid:d8cae5bd-3f45-41e1-bc85-fad499e98dc9,Namespace:kube-system,Attempt:0,} returns sandbox id \"6af4e0fc49e6ef5a4f752d11eeece45f50785a1a1ec148698f4f342a854adf88\"" Dec 13 14:15:07.371533 env[1849]: time="2024-12-13T14:15:07.371464122Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 14:15:07.375476 env[1849]: time="2024-12-13T14:15:07.375390763Z" level=info msg="CreateContainer within sandbox \"f395f774d566eb5bbf750fa8122fe8e7595ed50259fbc99e1d5bc988952de685\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f97ea051817e7fc37fba32787fd0f11b6674d9e9491afe19c5b87102f85fbb92\"" Dec 13 14:15:07.376911 env[1849]: time="2024-12-13T14:15:07.376822280Z" level=info msg="StartContainer for \"f97ea051817e7fc37fba32787fd0f11b6674d9e9491afe19c5b87102f85fbb92\"" Dec 13 14:15:07.401666 env[1849]: time="2024-12-13T14:15:07.401600768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-zln6c,Uid:83d721e2-4439-42e1-abfe-8f2d2ce74d4e,Namespace:kube-system,Attempt:0,}" Dec 13 14:15:07.428699 env[1849]: time="2024-12-13T14:15:07.428468769Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:15:07.429813 env[1849]: time="2024-12-13T14:15:07.428545846Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:15:07.429813 env[1849]: time="2024-12-13T14:15:07.428888854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:15:07.430034 env[1849]: time="2024-12-13T14:15:07.429965770Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1385dd6545468053ec922c05a1ec856745d58cd0920e9760f88a3988d5c7f251 pid=3207 runtime=io.containerd.runc.v2 Dec 13 14:15:07.533940 env[1849]: time="2024-12-13T14:15:07.531675432Z" level=info msg="StartContainer for \"f97ea051817e7fc37fba32787fd0f11b6674d9e9491afe19c5b87102f85fbb92\" returns successfully" Dec 13 14:15:07.561007 env[1849]: time="2024-12-13T14:15:07.557947129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-zln6c,Uid:83d721e2-4439-42e1-abfe-8f2d2ce74d4e,Namespace:kube-system,Attempt:0,} returns sandbox id \"1385dd6545468053ec922c05a1ec856745d58cd0920e9760f88a3988d5c7f251\"" Dec 13 14:15:08.208908 kubelet[3023]: I1213 14:15:08.208851 3023 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-q49qs" podStartSLOduration=2.208794238 podStartE2EDuration="2.208794238s" podCreationTimestamp="2024-12-13 14:15:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:15:08.208345294 +0000 UTC m=+14.459050158" watchObservedRunningTime="2024-12-13 14:15:08.208794238 +0000 UTC m=+14.459499078" Dec 13 14:15:15.689616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2206619074.mount: Deactivated successfully. Dec 13 14:15:19.786536 env[1849]: time="2024-12-13T14:15:19.786452032Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:19.791898 env[1849]: time="2024-12-13T14:15:19.791805725Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:19.798789 env[1849]: time="2024-12-13T14:15:19.798723151Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:19.800265 env[1849]: time="2024-12-13T14:15:19.800218807Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Dec 13 14:15:19.804213 env[1849]: time="2024-12-13T14:15:19.804161048Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 14:15:19.809453 env[1849]: time="2024-12-13T14:15:19.809388573Z" level=info msg="CreateContainer within sandbox \"6af4e0fc49e6ef5a4f752d11eeece45f50785a1a1ec148698f4f342a854adf88\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:15:19.842368 env[1849]: time="2024-12-13T14:15:19.842308097Z" level=info msg="CreateContainer within sandbox \"6af4e0fc49e6ef5a4f752d11eeece45f50785a1a1ec148698f4f342a854adf88\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9b6a43c3b34d9316268bae050eb52726b08b10789d3a8ae99a74d47035957def\"" Dec 13 
14:15:19.845254 env[1849]: time="2024-12-13T14:15:19.844971641Z" level=info msg="StartContainer for \"9b6a43c3b34d9316268bae050eb52726b08b10789d3a8ae99a74d47035957def\"" Dec 13 14:15:19.956515 env[1849]: time="2024-12-13T14:15:19.956452147Z" level=info msg="StartContainer for \"9b6a43c3b34d9316268bae050eb52726b08b10789d3a8ae99a74d47035957def\" returns successfully" Dec 13 14:15:20.624605 env[1849]: time="2024-12-13T14:15:20.624520539Z" level=info msg="shim disconnected" id=9b6a43c3b34d9316268bae050eb52726b08b10789d3a8ae99a74d47035957def Dec 13 14:15:20.624605 env[1849]: time="2024-12-13T14:15:20.624591567Z" level=warning msg="cleaning up after shim disconnected" id=9b6a43c3b34d9316268bae050eb52726b08b10789d3a8ae99a74d47035957def namespace=k8s.io Dec 13 14:15:20.624942 env[1849]: time="2024-12-13T14:15:20.624615219Z" level=info msg="cleaning up dead shim" Dec 13 14:15:20.638306 env[1849]: time="2024-12-13T14:15:20.638238990Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:15:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3429 runtime=io.containerd.runc.v2\n" Dec 13 14:15:20.824552 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9b6a43c3b34d9316268bae050eb52726b08b10789d3a8ae99a74d47035957def-rootfs.mount: Deactivated successfully. Dec 13 14:15:21.248109 env[1849]: time="2024-12-13T14:15:21.244198809Z" level=info msg="CreateContainer within sandbox \"6af4e0fc49e6ef5a4f752d11eeece45f50785a1a1ec148698f4f342a854adf88\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:15:21.293820 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1046347518.mount: Deactivated successfully. Dec 13 14:15:21.320685 env[1849]: time="2024-12-13T14:15:21.320625612Z" level=info msg="CreateContainer within sandbox \"6af4e0fc49e6ef5a4f752d11eeece45f50785a1a1ec148698f4f342a854adf88\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f105f25127e23962b911b023b84c0e46a5a48ce177d7f4ad7c66fe95be15e02c\"" Dec 13 14:15:21.323911 env[1849]: time="2024-12-13T14:15:21.323835733Z" level=info msg="StartContainer for \"f105f25127e23962b911b023b84c0e46a5a48ce177d7f4ad7c66fe95be15e02c\"" Dec 13 14:15:21.455960 env[1849]: time="2024-12-13T14:15:21.455872552Z" level=info msg="StartContainer for \"f105f25127e23962b911b023b84c0e46a5a48ce177d7f4ad7c66fe95be15e02c\" returns successfully" Dec 13 14:15:21.462616 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:15:21.463749 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:15:21.467743 systemd[1]: Stopping systemd-sysctl.service... Dec 13 14:15:21.471956 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:15:21.495385 systemd[1]: Finished systemd-sysctl.service. 
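The mount-cgroup entry above is the first of Cilium's init containers: it runs to completion, its runc v2 shim exits, and containerd logs the "shim disconnected" cleanup followed by the rootfs unmount. A sketch of watching that same lifecycle through the containerd Go client, assuming the task is still live when the program attaches; the container ID is copied from the log and the k8s.io namespace is where CRI-managed containers live:

package main

import (
	"context"
	"fmt"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	const id = "9b6a43c3b34d9316268bae050eb52726b08b10789d3a8ae99a74d47035957def"
	container, err := client.LoadContainer(ctx, id)
	if err != nil {
		panic(err)
	}
	task, err := container.Task(ctx, nil)
	if err != nil {
		panic(err)
	}
	statusC, err := task.Wait(ctx) // resolves when the init container exits
	if err != nil {
		panic(err)
	}
	status := <-statusC
	fmt.Println("exit code:", status.ExitCode())
}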
Dec 13 14:15:21.515248 env[1849]: time="2024-12-13T14:15:21.515108608Z" level=info msg="shim disconnected" id=f105f25127e23962b911b023b84c0e46a5a48ce177d7f4ad7c66fe95be15e02c Dec 13 14:15:21.515619 env[1849]: time="2024-12-13T14:15:21.515582908Z" level=warning msg="cleaning up after shim disconnected" id=f105f25127e23962b911b023b84c0e46a5a48ce177d7f4ad7c66fe95be15e02c namespace=k8s.io Dec 13 14:15:21.515749 env[1849]: time="2024-12-13T14:15:21.515719852Z" level=info msg="cleaning up dead shim" Dec 13 14:15:21.531707 env[1849]: time="2024-12-13T14:15:21.531652183Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:15:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3492 runtime=io.containerd.runc.v2\n" Dec 13 14:15:21.826249 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f105f25127e23962b911b023b84c0e46a5a48ce177d7f4ad7c66fe95be15e02c-rootfs.mount: Deactivated successfully. Dec 13 14:15:22.259559 env[1849]: time="2024-12-13T14:15:22.259375863Z" level=info msg="CreateContainer within sandbox \"6af4e0fc49e6ef5a4f752d11eeece45f50785a1a1ec148698f4f342a854adf88\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 14:15:22.323734 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2784106809.mount: Deactivated successfully. Dec 13 14:15:22.340425 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1651019983.mount: Deactivated successfully. Dec 13 14:15:22.369587 env[1849]: time="2024-12-13T14:15:22.369487379Z" level=info msg="CreateContainer within sandbox \"6af4e0fc49e6ef5a4f752d11eeece45f50785a1a1ec148698f4f342a854adf88\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"36960959e67b6b7947fd4554e2209f1f3bbb3aa0f9ddb2c482cdd9579f8361b1\"" Dec 13 14:15:22.373279 env[1849]: time="2024-12-13T14:15:22.373219920Z" level=info msg="StartContainer for \"36960959e67b6b7947fd4554e2209f1f3bbb3aa0f9ddb2c482cdd9579f8361b1\"" Dec 13 14:15:22.519126 env[1849]: time="2024-12-13T14:15:22.518923708Z" level=info msg="StartContainer for \"36960959e67b6b7947fd4554e2209f1f3bbb3aa0f9ddb2c482cdd9579f8361b1\" returns successfully" Dec 13 14:15:22.585667 env[1849]: time="2024-12-13T14:15:22.585603976Z" level=info msg="shim disconnected" id=36960959e67b6b7947fd4554e2209f1f3bbb3aa0f9ddb2c482cdd9579f8361b1 Dec 13 14:15:22.586162 env[1849]: time="2024-12-13T14:15:22.586125797Z" level=warning msg="cleaning up after shim disconnected" id=36960959e67b6b7947fd4554e2209f1f3bbb3aa0f9ddb2c482cdd9579f8361b1 namespace=k8s.io Dec 13 14:15:22.586302 env[1849]: time="2024-12-13T14:15:22.586273745Z" level=info msg="cleaning up dead shim" Dec 13 14:15:22.603729 env[1849]: time="2024-12-13T14:15:22.603674684Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:15:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3549 runtime=io.containerd.runc.v2\n" Dec 13 14:15:23.270095 env[1849]: time="2024-12-13T14:15:23.258024845Z" level=info msg="CreateContainer within sandbox \"6af4e0fc49e6ef5a4f752d11eeece45f50785a1a1ec148698f4f342a854adf88\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 14:15:23.303688 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1718174411.mount: Deactivated successfully. Dec 13 14:15:23.317016 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1544317421.mount: Deactivated successfully. 
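The mount-bpf-fs init container created above presumably performs the equivalent of the following sketch: mounting the BPF filesystem so the agent can pin maps under /sys/fs/bpf. Root is required, and the source/target strings are the conventional ones rather than values taken from this log:

package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	if err := unix.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, ""); err != nil && err != unix.EBUSY {
		// EBUSY just means bpffs is already mounted there, which is fine.
		panic(err)
	}
	fmt.Println("bpffs available at /sys/fs/bpf")
}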
Dec 13 14:15:23.336892 env[1849]: time="2024-12-13T14:15:23.336805027Z" level=info msg="CreateContainer within sandbox \"6af4e0fc49e6ef5a4f752d11eeece45f50785a1a1ec148698f4f342a854adf88\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"59bf7a2f6aec33dfa8527f2d380c5d54af4c1c020ebe9c9971d3be55f786ddef\"" Dec 13 14:15:23.339451 env[1849]: time="2024-12-13T14:15:23.338112619Z" level=info msg="StartContainer for \"59bf7a2f6aec33dfa8527f2d380c5d54af4c1c020ebe9c9971d3be55f786ddef\"" Dec 13 14:15:23.380325 env[1849]: time="2024-12-13T14:15:23.380264271Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:23.387169 env[1849]: time="2024-12-13T14:15:23.387109180Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:23.399323 env[1849]: time="2024-12-13T14:15:23.399265758Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:23.402398 env[1849]: time="2024-12-13T14:15:23.401008890Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Dec 13 14:15:23.408471 env[1849]: time="2024-12-13T14:15:23.408408968Z" level=info msg="CreateContainer within sandbox \"1385dd6545468053ec922c05a1ec856745d58cd0920e9760f88a3988d5c7f251\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 14:15:23.434822 env[1849]: time="2024-12-13T14:15:23.434733156Z" level=info msg="CreateContainer within sandbox \"1385dd6545468053ec922c05a1ec856745d58cd0920e9760f88a3988d5c7f251\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f15cad811e701ca0b11eda122000c4b0b967e8c5bde7eb7a27828a117bf8c6ff\"" Dec 13 14:15:23.438670 env[1849]: time="2024-12-13T14:15:23.438613693Z" level=info msg="StartContainer for \"f15cad811e701ca0b11eda122000c4b0b967e8c5bde7eb7a27828a117bf8c6ff\"" Dec 13 14:15:23.456528 env[1849]: time="2024-12-13T14:15:23.456464104Z" level=info msg="StartContainer for \"59bf7a2f6aec33dfa8527f2d380c5d54af4c1c020ebe9c9971d3be55f786ddef\" returns successfully" Dec 13 14:15:23.571949 env[1849]: time="2024-12-13T14:15:23.571861153Z" level=info msg="shim disconnected" id=59bf7a2f6aec33dfa8527f2d380c5d54af4c1c020ebe9c9971d3be55f786ddef Dec 13 14:15:23.572354 env[1849]: time="2024-12-13T14:15:23.572319325Z" level=warning msg="cleaning up after shim disconnected" id=59bf7a2f6aec33dfa8527f2d380c5d54af4c1c020ebe9c9971d3be55f786ddef namespace=k8s.io Dec 13 14:15:23.579122 env[1849]: time="2024-12-13T14:15:23.577394954Z" level=info msg="cleaning up dead shim" Dec 13 14:15:23.620186 env[1849]: time="2024-12-13T14:15:23.620107737Z" level=info msg="StartContainer for \"f15cad811e701ca0b11eda122000c4b0b967e8c5bde7eb7a27828a117bf8c6ff\" returns successfully" Dec 13 14:15:23.623936 env[1849]: time="2024-12-13T14:15:23.623698654Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:15:23Z\" level=info 
msg=\"starting signal loop\" namespace=k8s.io pid=3632 runtime=io.containerd.runc.v2\n" Dec 13 14:15:24.270448 env[1849]: time="2024-12-13T14:15:24.270361234Z" level=info msg="CreateContainer within sandbox \"6af4e0fc49e6ef5a4f752d11eeece45f50785a1a1ec148698f4f342a854adf88\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 14:15:24.335391 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2928681199.mount: Deactivated successfully. Dec 13 14:15:24.354423 env[1849]: time="2024-12-13T14:15:24.354343068Z" level=info msg="CreateContainer within sandbox \"6af4e0fc49e6ef5a4f752d11eeece45f50785a1a1ec148698f4f342a854adf88\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"95b29bf9ad0335ec36fe9b269039e7e6e88c03e7d8545e4879e2985c50ee6215\"" Dec 13 14:15:24.355533 env[1849]: time="2024-12-13T14:15:24.355461588Z" level=info msg="StartContainer for \"95b29bf9ad0335ec36fe9b269039e7e6e88c03e7d8545e4879e2985c50ee6215\"" Dec 13 14:15:24.619249 kubelet[3023]: I1213 14:15:24.619189 3023 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-zln6c" podStartSLOduration=1.778786684 podStartE2EDuration="17.619120136s" podCreationTimestamp="2024-12-13 14:15:07 +0000 UTC" firstStartedPulling="2024-12-13 14:15:07.562590687 +0000 UTC m=+13.813295527" lastFinishedPulling="2024-12-13 14:15:23.402924151 +0000 UTC m=+29.653628979" observedRunningTime="2024-12-13 14:15:24.468226327 +0000 UTC m=+30.718931263" watchObservedRunningTime="2024-12-13 14:15:24.619120136 +0000 UTC m=+30.869825000" Dec 13 14:15:24.676124 env[1849]: time="2024-12-13T14:15:24.676026162Z" level=info msg="StartContainer for \"95b29bf9ad0335ec36fe9b269039e7e6e88c03e7d8545e4879e2985c50ee6215\" returns successfully" Dec 13 14:15:25.023121 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Dec 13 14:15:25.062695 kubelet[3023]: I1213 14:15:25.062326 3023 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 14:15:25.342490 kubelet[3023]: I1213 14:15:25.342333 3023 topology_manager.go:215] "Topology Admit Handler" podUID="6ba36529-3b61-4c6b-a2e4-81b2e4f17e45" podNamespace="kube-system" podName="coredns-76f75df574-5xx6n" Dec 13 14:15:25.347249 kubelet[3023]: I1213 14:15:25.347189 3023 topology_manager.go:215] "Topology Admit Handler" podUID="b119bb2e-6e83-40d4-aca2-0bcaf7b47f4f" podNamespace="kube-system" podName="coredns-76f75df574-7wxbj" Dec 13 14:15:25.466472 kubelet[3023]: I1213 14:15:25.466411 3023 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-t4vnt" podStartSLOduration=7.035608483 podStartE2EDuration="19.466353345s" podCreationTimestamp="2024-12-13 14:15:06 +0000 UTC" firstStartedPulling="2024-12-13 14:15:07.370108685 +0000 UTC m=+13.620813561" lastFinishedPulling="2024-12-13 14:15:19.800853535 +0000 UTC m=+26.051558423" observedRunningTime="2024-12-13 14:15:25.426058911 +0000 UTC m=+31.676763775" watchObservedRunningTime="2024-12-13 14:15:25.466353345 +0000 UTC m=+31.717058197" Dec 13 14:15:25.488384 kubelet[3023]: I1213 14:15:25.488319 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b119bb2e-6e83-40d4-aca2-0bcaf7b47f4f-config-volume\") pod \"coredns-76f75df574-7wxbj\" (UID: \"b119bb2e-6e83-40d4-aca2-0bcaf7b47f4f\") " pod="kube-system/coredns-76f75df574-7wxbj" Dec 13 14:15:25.488576 kubelet[3023]: I1213 14:15:25.488407 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k6ll\" (UniqueName: \"kubernetes.io/projected/6ba36529-3b61-4c6b-a2e4-81b2e4f17e45-kube-api-access-8k6ll\") pod \"coredns-76f75df574-5xx6n\" (UID: \"6ba36529-3b61-4c6b-a2e4-81b2e4f17e45\") " pod="kube-system/coredns-76f75df574-5xx6n" Dec 13 14:15:25.488576 kubelet[3023]: I1213 14:15:25.488458 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbzzp\" (UniqueName: \"kubernetes.io/projected/b119bb2e-6e83-40d4-aca2-0bcaf7b47f4f-kube-api-access-mbzzp\") pod \"coredns-76f75df574-7wxbj\" (UID: \"b119bb2e-6e83-40d4-aca2-0bcaf7b47f4f\") " pod="kube-system/coredns-76f75df574-7wxbj" Dec 13 14:15:25.488576 kubelet[3023]: I1213 14:15:25.488520 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ba36529-3b61-4c6b-a2e4-81b2e4f17e45-config-volume\") pod \"coredns-76f75df574-5xx6n\" (UID: \"6ba36529-3b61-4c6b-a2e4-81b2e4f17e45\") " pod="kube-system/coredns-76f75df574-5xx6n" Dec 13 14:15:25.665214 env[1849]: time="2024-12-13T14:15:25.664630120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-5xx6n,Uid:6ba36529-3b61-4c6b-a2e4-81b2e4f17e45,Namespace:kube-system,Attempt:0,}" Dec 13 14:15:25.678052 env[1849]: time="2024-12-13T14:15:25.677897790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-7wxbj,Uid:b119bb2e-6e83-40d4-aca2-0bcaf7b47f4f,Namespace:kube-system,Attempt:0,}" Dec 13 14:15:26.234124 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
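The startup-latency numbers reported for cilium-t4vnt above are internally consistent: the SLO duration is the end-to-end duration minus the image-pull window. A sketch that re-derives both figures purely from timestamps copied out of the log:

package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2024-12-13 14:15:06 +0000 UTC")
	pullStart := mustParse("2024-12-13 14:15:07.370108685 +0000 UTC")
	pullEnd := mustParse("2024-12-13 14:15:19.800853535 +0000 UTC")
	running := mustParse("2024-12-13 14:15:25.466353345 +0000 UTC")

	e2e := running.Sub(created)         // ~19.466s, matches podStartE2EDuration
	slo := e2e - pullEnd.Sub(pullStart) // ~7.036s, matches podStartSLOduration
	fmt.Println("e2e:", e2e, "slo:", slo)
}

That is roughly 19.466s end to end and 7.036s once the 12.43s image pull is excluded, matching podStartE2EDuration and podStartSLOduration to within rounding.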
Dec 13 14:15:28.134261 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Dec 13 14:15:28.134408 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Dec 13 14:15:28.138247 (udev-worker)[3796]: Network interface NamePolicy= disabled on kernel command line. Dec 13 14:15:28.139722 systemd-networkd[1512]: cilium_host: Link UP Dec 13 14:15:28.140149 systemd-networkd[1512]: cilium_net: Link UP Dec 13 14:15:28.140537 systemd-networkd[1512]: cilium_net: Gained carrier Dec 13 14:15:28.140943 systemd-networkd[1512]: cilium_host: Gained carrier Dec 13 14:15:28.146551 (udev-worker)[3761]: Network interface NamePolicy= disabled on kernel command line. Dec 13 14:15:28.334386 systemd-networkd[1512]: cilium_vxlan: Link UP Dec 13 14:15:28.334406 systemd-networkd[1512]: cilium_vxlan: Gained carrier Dec 13 14:15:28.520739 systemd-networkd[1512]: cilium_net: Gained IPv6LL Dec 13 14:15:28.824290 systemd-networkd[1512]: cilium_host: Gained IPv6LL Dec 13 14:15:28.833119 kernel: NET: Registered PF_ALG protocol family Dec 13 14:15:30.168281 systemd-networkd[1512]: cilium_vxlan: Gained IPv6LL Dec 13 14:15:30.327341 systemd-networkd[1512]: lxc_health: Link UP Dec 13 14:15:30.331735 (udev-worker)[3822]: Network interface NamePolicy= disabled on kernel command line. Dec 13 14:15:30.336565 systemd-networkd[1512]: lxc_health: Gained carrier Dec 13 14:15:30.337219 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 14:15:30.870160 kernel: eth0: renamed from tmp8520a Dec 13 14:15:30.879037 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc388fb1ae8be5: link becomes ready Dec 13 14:15:30.865127 systemd-networkd[1512]: lxc388fb1ae8be5: Link UP Dec 13 14:15:30.879763 systemd-networkd[1512]: lxc388fb1ae8be5: Gained carrier Dec 13 14:15:30.890612 systemd-networkd[1512]: lxc8ec51c22572e: Link UP Dec 13 14:15:30.925232 kernel: eth0: renamed from tmpb3195 Dec 13 14:15:30.939859 (udev-worker)[3823]: Network interface NamePolicy= disabled on kernel command line. Dec 13 14:15:30.969347 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc8ec51c22572e: link becomes ready Dec 13 14:15:30.969515 systemd-networkd[1512]: lxc8ec51c22572e: Gained carrier Dec 13 14:15:32.216429 systemd-networkd[1512]: lxc8ec51c22572e: Gained IPv6LL Dec 13 14:15:32.216871 systemd-networkd[1512]: lxc_health: Gained IPv6LL Dec 13 14:15:32.409358 systemd-networkd[1512]: lxc388fb1ae8be5: Gained IPv6LL Dec 13 14:15:39.546109 env[1849]: time="2024-12-13T14:15:39.545962883Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:15:39.546948 env[1849]: time="2024-12-13T14:15:39.546842675Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:15:39.547228 env[1849]: time="2024-12-13T14:15:39.547148412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:15:39.547762 env[1849]: time="2024-12-13T14:15:39.547693476Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8520a3bbb9e4cbfab9bf4f18bbed0b368ea15433bf6e2e7f55c017762dd48d7a pid=4173 runtime=io.containerd.runc.v2 Dec 13 14:15:39.638402 systemd[1]: run-containerd-runc-k8s.io-8520a3bbb9e4cbfab9bf4f18bbed0b368ea15433bf6e2e7f55c017762dd48d7a-runc.VrsSFl.mount: Deactivated successfully. 
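The datapath is now wired up: cilium_net and cilium_host, the cilium_vxlan overlay device, lxc_health, and one lxc* veth per pod all gain carrier and IPv6 link-local addresses in the entries above. A stdlib-only sketch that enumerates those interfaces on the node:

package main

import (
	"fmt"
	"net"
	"strings"
)

func main() {
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, ifc := range ifaces {
		if strings.HasPrefix(ifc.Name, "cilium_") || strings.HasPrefix(ifc.Name, "lxc") {
			up := ifc.Flags&net.FlagUp != 0
			fmt.Printf("%-16s up=%v mtu=%d\n", ifc.Name, up, ifc.MTU)
		}
	}
}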
Dec 13 14:15:39.657562 env[1849]: time="2024-12-13T14:15:39.657438177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:15:39.657719 env[1849]: time="2024-12-13T14:15:39.657577341Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:15:39.657817 env[1849]: time="2024-12-13T14:15:39.657706005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:15:39.659121 env[1849]: time="2024-12-13T14:15:39.658477689Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b31957abf82830cfb77e23bfae5aa93f3e59bbb85c863164e476cb4df7d108e4 pid=4200 runtime=io.containerd.runc.v2
Dec 13 14:15:39.824046 env[1849]: time="2024-12-13T14:15:39.823962401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-5xx6n,Uid:6ba36529-3b61-4c6b-a2e4-81b2e4f17e45,Namespace:kube-system,Attempt:0,} returns sandbox id \"8520a3bbb9e4cbfab9bf4f18bbed0b368ea15433bf6e2e7f55c017762dd48d7a\""
Dec 13 14:15:39.833693 env[1849]: time="2024-12-13T14:15:39.833623571Z" level=info msg="CreateContainer within sandbox \"8520a3bbb9e4cbfab9bf4f18bbed0b368ea15433bf6e2e7f55c017762dd48d7a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 14:15:39.870673 env[1849]: time="2024-12-13T14:15:39.870612610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-7wxbj,Uid:b119bb2e-6e83-40d4-aca2-0bcaf7b47f4f,Namespace:kube-system,Attempt:0,} returns sandbox id \"b31957abf82830cfb77e23bfae5aa93f3e59bbb85c863164e476cb4df7d108e4\""
Dec 13 14:15:39.871407 env[1849]: time="2024-12-13T14:15:39.871206731Z" level=info msg="CreateContainer within sandbox \"8520a3bbb9e4cbfab9bf4f18bbed0b368ea15433bf6e2e7f55c017762dd48d7a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2b56e7149cf22334115c888bd740a42f95594368a87fc1a8bec352979aeded9f\""
Dec 13 14:15:39.880675 env[1849]: time="2024-12-13T14:15:39.880540456Z" level=info msg="CreateContainer within sandbox \"b31957abf82830cfb77e23bfae5aa93f3e59bbb85c863164e476cb4df7d108e4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 14:15:39.883331 env[1849]: time="2024-12-13T14:15:39.882852090Z" level=info msg="StartContainer for \"2b56e7149cf22334115c888bd740a42f95594368a87fc1a8bec352979aeded9f\""
Dec 13 14:15:39.917053 env[1849]: time="2024-12-13T14:15:39.916972359Z" level=info msg="CreateContainer within sandbox \"b31957abf82830cfb77e23bfae5aa93f3e59bbb85c863164e476cb4df7d108e4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"95fd6fda9af014ab2a336c848c5b7e53b679aab006ae8f27b2128c7d93c6fc40\""
Dec 13 14:15:39.920647 env[1849]: time="2024-12-13T14:15:39.920580161Z" level=info msg="StartContainer for \"95fd6fda9af014ab2a336c848c5b7e53b679aab006ae8f27b2128c7d93c6fc40\""
Dec 13 14:15:40.053179 env[1849]: time="2024-12-13T14:15:40.053113675Z" level=info msg="StartContainer for \"2b56e7149cf22334115c888bd740a42f95594368a87fc1a8bec352979aeded9f\" returns successfully"
Dec 13 14:15:40.129682 env[1849]: time="2024-12-13T14:15:40.129543860Z" level=info msg="StartContainer for \"95fd6fda9af014ab2a336c848c5b7e53b679aab006ae8f27b2128c7d93c6fc40\" returns successfully"
Dec 13 14:15:40.358919 kubelet[3023]: I1213 14:15:40.358851 3023 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-5xx6n" podStartSLOduration=33.358765404 podStartE2EDuration="33.358765404s" podCreationTimestamp="2024-12-13 14:15:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:15:40.340844622 +0000 UTC m=+46.591549462" watchObservedRunningTime="2024-12-13 14:15:40.358765404 +0000 UTC m=+46.609470256"
Dec 13 14:15:40.392351 kubelet[3023]: I1213 14:15:40.392218 3023 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-7wxbj" podStartSLOduration=33.392129172 podStartE2EDuration="33.392129172s" podCreationTimestamp="2024-12-13 14:15:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:15:40.389609459 +0000 UTC m=+46.640314299" watchObservedRunningTime="2024-12-13 14:15:40.392129172 +0000 UTC m=+46.642834036"
Dec 13 14:15:40.572511 systemd[1]: run-containerd-runc-k8s.io-b31957abf82830cfb77e23bfae5aa93f3e59bbb85c863164e476cb4df7d108e4-runc.R9MHmq.mount: Deactivated successfully.
Dec 13 14:15:40.858427 systemd[1]: Started sshd@5-172.31.27.214:22-139.178.89.65:44962.service.
Dec 13 14:15:41.036890 sshd[4325]: Accepted publickey for core from 139.178.89.65 port 44962 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q
Dec 13 14:15:41.039997 sshd[4325]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:15:41.049966 systemd[1]: Started session-6.scope.
Dec 13 14:15:41.050479 systemd-logind[1832]: New session 6 of user core.
Dec 13 14:15:41.328045 sshd[4325]: pam_unix(sshd:session): session closed for user core
Dec 13 14:15:41.336522 systemd[1]: sshd@5-172.31.27.214:22-139.178.89.65:44962.service: Deactivated successfully.
Dec 13 14:15:41.338824 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 14:15:41.339031 systemd-logind[1832]: Session 6 logged out. Waiting for processes to exit.
Dec 13 14:15:41.342216 systemd-logind[1832]: Removed session 6.
Dec 13 14:15:46.354682 systemd[1]: Started sshd@6-172.31.27.214:22-139.178.89.65:44968.service.
Dec 13 14:15:46.540271 sshd[4343]: Accepted publickey for core from 139.178.89.65 port 44968 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q
Dec 13 14:15:46.543670 sshd[4343]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:15:46.553671 systemd-logind[1832]: New session 7 of user core.
Dec 13 14:15:46.554530 systemd[1]: Started session-7.scope.
Dec 13 14:15:46.838361 sshd[4343]: pam_unix(sshd:session): session closed for user core
Dec 13 14:15:46.844727 systemd[1]: sshd@6-172.31.27.214:22-139.178.89.65:44968.service: Deactivated successfully.
Dec 13 14:15:46.846616 systemd-logind[1832]: Session 7 logged out. Waiting for processes to exit.
Dec 13 14:15:46.848346 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 14:15:46.851299 systemd-logind[1832]: Removed session 7.
Dec 13 14:15:51.864917 systemd[1]: Started sshd@7-172.31.27.214:22-139.178.89.65:37732.service.
Dec 13 14:15:52.039384 sshd[4357]: Accepted publickey for core from 139.178.89.65 port 37732 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q
Dec 13 14:15:52.041918 sshd[4357]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:15:52.050996 systemd[1]: Started session-8.scope.
Dec 13 14:15:52.051648 systemd-logind[1832]: New session 8 of user core.
Dec 13 14:15:52.302176 sshd[4357]: pam_unix(sshd:session): session closed for user core
Dec 13 14:15:52.307432 systemd[1]: sshd@7-172.31.27.214:22-139.178.89.65:37732.service: Deactivated successfully.
Dec 13 14:15:52.309653 systemd[1]: session-8.scope: Deactivated successfully.
Dec 13 14:15:52.310252 systemd-logind[1832]: Session 8 logged out. Waiting for processes to exit.
Dec 13 14:15:52.313018 systemd-logind[1832]: Removed session 8.
Dec 13 14:15:57.328523 systemd[1]: Started sshd@8-172.31.27.214:22-139.178.89.65:37748.service.
Dec 13 14:15:57.503119 sshd[4373]: Accepted publickey for core from 139.178.89.65 port 37748 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q
Dec 13 14:15:57.506012 sshd[4373]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:15:57.515253 systemd-logind[1832]: New session 9 of user core.
Dec 13 14:15:57.516014 systemd[1]: Started session-9.scope.
Dec 13 14:15:57.764424 sshd[4373]: pam_unix(sshd:session): session closed for user core
Dec 13 14:15:57.769971 systemd[1]: sshd@8-172.31.27.214:22-139.178.89.65:37748.service: Deactivated successfully.
Dec 13 14:15:57.773121 systemd-logind[1832]: Session 9 logged out. Waiting for processes to exit.
Dec 13 14:15:57.773326 systemd[1]: session-9.scope: Deactivated successfully.
Dec 13 14:15:57.776128 systemd-logind[1832]: Removed session 9.
Dec 13 14:16:02.791435 systemd[1]: Started sshd@9-172.31.27.214:22-139.178.89.65:34164.service.
Dec 13 14:16:02.968801 sshd[4387]: Accepted publickey for core from 139.178.89.65 port 34164 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q
Dec 13 14:16:02.972283 sshd[4387]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:16:02.981172 systemd-logind[1832]: New session 10 of user core.
Dec 13 14:16:02.981829 systemd[1]: Started session-10.scope.
Dec 13 14:16:03.233458 sshd[4387]: pam_unix(sshd:session): session closed for user core
Dec 13 14:16:03.239310 systemd[1]: sshd@9-172.31.27.214:22-139.178.89.65:34164.service: Deactivated successfully.
Dec 13 14:16:03.241657 systemd-logind[1832]: Session 10 logged out. Waiting for processes to exit.
Dec 13 14:16:03.241815 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 14:16:03.244833 systemd-logind[1832]: Removed session 10.
Dec 13 14:16:03.259364 systemd[1]: Started sshd@10-172.31.27.214:22-139.178.89.65:34168.service.
Dec 13 14:16:03.435396 sshd[4401]: Accepted publickey for core from 139.178.89.65 port 34168 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q
Dec 13 14:16:03.438145 sshd[4401]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:16:03.448192 systemd-logind[1832]: New session 11 of user core.
Dec 13 14:16:03.448445 systemd[1]: Started session-11.scope.
Dec 13 14:16:03.784810 sshd[4401]: pam_unix(sshd:session): session closed for user core
Dec 13 14:16:03.793208 systemd[1]: sshd@10-172.31.27.214:22-139.178.89.65:34168.service: Deactivated successfully.
Dec 13 14:16:03.794700 systemd-logind[1832]: Session 11 logged out. Waiting for processes to exit.
Dec 13 14:16:03.796799 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 14:16:03.798802 systemd-logind[1832]: Removed session 11.
Dec 13 14:16:03.811518 systemd[1]: Started sshd@11-172.31.27.214:22-139.178.89.65:34180.service.
Dec 13 14:16:03.999186 sshd[4412]: Accepted publickey for core from 139.178.89.65 port 34180 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q
Dec 13 14:16:04.002505 sshd[4412]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:16:04.011237 systemd-logind[1832]: New session 12 of user core.
Dec 13 14:16:04.011831 systemd[1]: Started session-12.scope.
Dec 13 14:16:04.259146 sshd[4412]: pam_unix(sshd:session): session closed for user core
Dec 13 14:16:04.264719 systemd[1]: sshd@11-172.31.27.214:22-139.178.89.65:34180.service: Deactivated successfully.
Dec 13 14:16:04.266899 systemd-logind[1832]: Session 12 logged out. Waiting for processes to exit.
Dec 13 14:16:04.267029 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 14:16:04.269847 systemd-logind[1832]: Removed session 12.
Dec 13 14:16:09.285627 systemd[1]: Started sshd@12-172.31.27.214:22-139.178.89.65:59378.service.
Dec 13 14:16:09.459662 sshd[4429]: Accepted publickey for core from 139.178.89.65 port 59378 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q
Dec 13 14:16:09.463000 sshd[4429]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:16:09.471854 systemd-logind[1832]: New session 13 of user core.
Dec 13 14:16:09.472272 systemd[1]: Started session-13.scope.
Dec 13 14:16:09.736040 sshd[4429]: pam_unix(sshd:session): session closed for user core
Dec 13 14:16:09.741377 systemd-logind[1832]: Session 13 logged out. Waiting for processes to exit.
Dec 13 14:16:09.742206 systemd[1]: sshd@12-172.31.27.214:22-139.178.89.65:59378.service: Deactivated successfully.
Dec 13 14:16:09.743763 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 14:16:09.747805 systemd-logind[1832]: Removed session 13.
Dec 13 14:16:14.762303 systemd[1]: Started sshd@13-172.31.27.214:22-139.178.89.65:59388.service.
Dec 13 14:16:14.937829 sshd[4442]: Accepted publickey for core from 139.178.89.65 port 59388 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q
Dec 13 14:16:14.941447 sshd[4442]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:16:14.954760 systemd[1]: Started session-14.scope.
Dec 13 14:16:14.957192 systemd-logind[1832]: New session 14 of user core.
Dec 13 14:16:15.217252 sshd[4442]: pam_unix(sshd:session): session closed for user core
Dec 13 14:16:15.222060 systemd[1]: sshd@13-172.31.27.214:22-139.178.89.65:59388.service: Deactivated successfully.
Dec 13 14:16:15.224781 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 14:16:15.225403 systemd-logind[1832]: Session 14 logged out. Waiting for processes to exit.
Dec 13 14:16:15.228459 systemd-logind[1832]: Removed session 14.
Dec 13 14:16:20.243137 systemd[1]: Started sshd@14-172.31.27.214:22-139.178.89.65:47654.service.
Dec 13 14:16:20.414861 sshd[4455]: Accepted publickey for core from 139.178.89.65 port 47654 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q
Dec 13 14:16:20.417613 sshd[4455]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:16:20.425558 systemd-logind[1832]: New session 15 of user core.
Dec 13 14:16:20.427063 systemd[1]: Started session-15.scope.
Dec 13 14:16:20.682952 sshd[4455]: pam_unix(sshd:session): session closed for user core
Dec 13 14:16:20.688511 systemd-logind[1832]: Session 15 logged out. Waiting for processes to exit.
Dec 13 14:16:20.689673 systemd[1]: sshd@14-172.31.27.214:22-139.178.89.65:47654.service: Deactivated successfully.
Dec 13 14:16:20.691759 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 14:16:20.692982 systemd-logind[1832]: Removed session 15.
Dec 13 14:16:25.709906 systemd[1]: Started sshd@15-172.31.27.214:22-139.178.89.65:47662.service.
Dec 13 14:16:25.891732 sshd[4468]: Accepted publickey for core from 139.178.89.65 port 47662 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q
Dec 13 14:16:25.894743 sshd[4468]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:16:25.905000 systemd-logind[1832]: New session 16 of user core.
Dec 13 14:16:25.907309 systemd[1]: Started session-16.scope.
Dec 13 14:16:26.157868 sshd[4468]: pam_unix(sshd:session): session closed for user core
Dec 13 14:16:26.163851 systemd-logind[1832]: Session 16 logged out. Waiting for processes to exit.
Dec 13 14:16:26.164274 systemd[1]: sshd@15-172.31.27.214:22-139.178.89.65:47662.service: Deactivated successfully.
Dec 13 14:16:26.166423 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 14:16:26.168713 systemd-logind[1832]: Removed session 16.
Dec 13 14:16:26.185198 systemd[1]: Started sshd@16-172.31.27.214:22-139.178.89.65:47672.service.
Dec 13 14:16:26.361397 sshd[4481]: Accepted publickey for core from 139.178.89.65 port 47672 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q
Dec 13 14:16:26.363260 sshd[4481]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:16:26.371394 systemd-logind[1832]: New session 17 of user core.
Dec 13 14:16:26.372945 systemd[1]: Started session-17.scope.
Dec 13 14:16:26.698543 sshd[4481]: pam_unix(sshd:session): session closed for user core
Dec 13 14:16:26.704311 systemd[1]: sshd@16-172.31.27.214:22-139.178.89.65:47672.service: Deactivated successfully.
Dec 13 14:16:26.706314 systemd-logind[1832]: Session 17 logged out. Waiting for processes to exit.
Dec 13 14:16:26.706826 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 14:16:26.708891 systemd-logind[1832]: Removed session 17.
Dec 13 14:16:26.729008 systemd[1]: Started sshd@17-172.31.27.214:22-139.178.89.65:47684.service.
Dec 13 14:16:26.907293 sshd[4492]: Accepted publickey for core from 139.178.89.65 port 47684 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q
Dec 13 14:16:26.909982 sshd[4492]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:16:26.919019 systemd-logind[1832]: New session 18 of user core.
Dec 13 14:16:26.920367 systemd[1]: Started session-18.scope.
Dec 13 14:16:29.632054 sshd[4492]: pam_unix(sshd:session): session closed for user core
Dec 13 14:16:29.637914 systemd[1]: sshd@17-172.31.27.214:22-139.178.89.65:47684.service: Deactivated successfully.
Dec 13 14:16:29.640536 systemd-logind[1832]: Session 18 logged out. Waiting for processes to exit.
Dec 13 14:16:29.640699 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 14:16:29.643486 systemd-logind[1832]: Removed session 18.
Dec 13 14:16:29.659733 systemd[1]: Started sshd@18-172.31.27.214:22-139.178.89.65:44686.service.
Dec 13 14:16:29.857385 sshd[4509]: Accepted publickey for core from 139.178.89.65 port 44686 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q
Dec 13 14:16:29.860037 sshd[4509]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:16:29.867267 systemd-logind[1832]: New session 19 of user core.
Dec 13 14:16:29.869287 systemd[1]: Started session-19.scope.
Dec 13 14:16:30.423402 sshd[4509]: pam_unix(sshd:session): session closed for user core
Dec 13 14:16:30.430178 systemd[1]: sshd@18-172.31.27.214:22-139.178.89.65:44686.service: Deactivated successfully.
Dec 13 14:16:30.431733 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 14:16:30.432885 systemd-logind[1832]: Session 19 logged out. Waiting for processes to exit.
Dec 13 14:16:30.436413 systemd-logind[1832]: Removed session 19.
Dec 13 14:16:30.451349 systemd[1]: Started sshd@19-172.31.27.214:22-139.178.89.65:44700.service.
Dec 13 14:16:30.625990 sshd[4520]: Accepted publickey for core from 139.178.89.65 port 44700 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q
Dec 13 14:16:30.627260 sshd[4520]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:16:30.636218 systemd-logind[1832]: New session 20 of user core.
Dec 13 14:16:30.637587 systemd[1]: Started session-20.scope.
Dec 13 14:16:30.899144 sshd[4520]: pam_unix(sshd:session): session closed for user core
Dec 13 14:16:30.905174 systemd-logind[1832]: Session 20 logged out. Waiting for processes to exit.
Dec 13 14:16:30.906664 systemd[1]: sshd@19-172.31.27.214:22-139.178.89.65:44700.service: Deactivated successfully.
Dec 13 14:16:30.909580 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 14:16:30.912923 systemd-logind[1832]: Removed session 20.
Dec 13 14:16:35.924528 systemd[1]: Started sshd@20-172.31.27.214:22-139.178.89.65:44706.service.
Dec 13 14:16:36.097103 sshd[4533]: Accepted publickey for core from 139.178.89.65 port 44706 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q
Dec 13 14:16:36.099852 sshd[4533]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:16:36.109282 systemd-logind[1832]: New session 21 of user core.
Dec 13 14:16:36.110965 systemd[1]: Started session-21.scope.
Dec 13 14:16:36.369482 sshd[4533]: pam_unix(sshd:session): session closed for user core
Dec 13 14:16:36.375810 systemd-logind[1832]: Session 21 logged out. Waiting for processes to exit.
Dec 13 14:16:36.376483 systemd[1]: sshd@20-172.31.27.214:22-139.178.89.65:44706.service: Deactivated successfully.
Dec 13 14:16:36.379104 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 14:16:36.382335 systemd-logind[1832]: Removed session 21.
Dec 13 14:16:41.396426 systemd[1]: Started sshd@21-172.31.27.214:22-139.178.89.65:60384.service.
Dec 13 14:16:41.575775 sshd[4551]: Accepted publickey for core from 139.178.89.65 port 60384 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q
Dec 13 14:16:41.579044 sshd[4551]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:16:41.592256 systemd-logind[1832]: New session 22 of user core.
Dec 13 14:16:41.593660 systemd[1]: Started session-22.scope.
Dec 13 14:16:41.846439 sshd[4551]: pam_unix(sshd:session): session closed for user core
Dec 13 14:16:41.852472 systemd[1]: sshd@21-172.31.27.214:22-139.178.89.65:60384.service: Deactivated successfully.
Dec 13 14:16:41.855233 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 14:16:41.856334 systemd-logind[1832]: Session 22 logged out. Waiting for processes to exit.
Dec 13 14:16:41.859835 systemd-logind[1832]: Removed session 22.
Dec 13 14:16:46.874041 systemd[1]: Started sshd@22-172.31.27.214:22-139.178.89.65:60392.service.
Dec 13 14:16:47.048846 sshd[4564]: Accepted publickey for core from 139.178.89.65 port 60392 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q
Dec 13 14:16:47.051851 sshd[4564]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:16:47.062185 systemd-logind[1832]: New session 23 of user core.
Dec 13 14:16:47.063901 systemd[1]: Started session-23.scope.
Dec 13 14:16:47.334346 sshd[4564]: pam_unix(sshd:session): session closed for user core
Dec 13 14:16:47.340768 systemd[1]: sshd@22-172.31.27.214:22-139.178.89.65:60392.service: Deactivated successfully.
Dec 13 14:16:47.343890 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 14:16:47.344646 systemd-logind[1832]: Session 23 logged out. Waiting for processes to exit.
Dec 13 14:16:47.349058 systemd-logind[1832]: Removed session 23.
Dec 13 14:16:52.362887 systemd[1]: Started sshd@23-172.31.27.214:22-139.178.89.65:55688.service.
Dec 13 14:16:52.545265 sshd[4576]: Accepted publickey for core from 139.178.89.65 port 55688 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q
Dec 13 14:16:52.548445 sshd[4576]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:16:52.559783 systemd[1]: Started session-24.scope.
Dec 13 14:16:52.560363 systemd-logind[1832]: New session 24 of user core.
Dec 13 14:16:52.833323 sshd[4576]: pam_unix(sshd:session): session closed for user core
Dec 13 14:16:52.838749 systemd[1]: sshd@23-172.31.27.214:22-139.178.89.65:55688.service: Deactivated successfully.
Dec 13 14:16:52.840734 systemd[1]: session-24.scope: Deactivated successfully.
Dec 13 14:16:52.844678 systemd-logind[1832]: Session 24 logged out. Waiting for processes to exit.
Dec 13 14:16:52.848300 systemd-logind[1832]: Removed session 24.
Dec 13 14:16:52.859323 systemd[1]: Started sshd@24-172.31.27.214:22-139.178.89.65:55690.service.
Dec 13 14:16:53.040050 sshd[4589]: Accepted publickey for core from 139.178.89.65 port 55690 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q
Dec 13 14:16:53.042782 sshd[4589]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:16:53.051668 systemd-logind[1832]: New session 25 of user core.
Dec 13 14:16:53.052961 systemd[1]: Started session-25.scope.
Dec 13 14:16:56.384980 env[1849]: time="2024-12-13T14:16:56.383443584Z" level=info msg="StopContainer for \"f15cad811e701ca0b11eda122000c4b0b967e8c5bde7eb7a27828a117bf8c6ff\" with timeout 30 (s)"
Dec 13 14:16:56.387242 env[1849]: time="2024-12-13T14:16:56.387170236Z" level=info msg="Stop container \"f15cad811e701ca0b11eda122000c4b0b967e8c5bde7eb7a27828a117bf8c6ff\" with signal terminated"
Dec 13 14:16:56.419771 systemd[1]: run-containerd-runc-k8s.io-95b29bf9ad0335ec36fe9b269039e7e6e88c03e7d8545e4879e2985c50ee6215-runc.TCowG7.mount: Deactivated successfully.
Dec 13 14:16:56.457637 env[1849]: time="2024-12-13T14:16:56.457457304Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 14:16:56.470689 env[1849]: time="2024-12-13T14:16:56.470635908Z" level=info msg="StopContainer for \"95b29bf9ad0335ec36fe9b269039e7e6e88c03e7d8545e4879e2985c50ee6215\" with timeout 2 (s)"
Dec 13 14:16:56.471552 env[1849]: time="2024-12-13T14:16:56.471498961Z" level=info msg="Stop container \"95b29bf9ad0335ec36fe9b269039e7e6e88c03e7d8545e4879e2985c50ee6215\" with signal terminated"
Dec 13 14:16:56.480796 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f15cad811e701ca0b11eda122000c4b0b967e8c5bde7eb7a27828a117bf8c6ff-rootfs.mount: Deactivated successfully.
Dec 13 14:16:56.498889 systemd-networkd[1512]: lxc_health: Link DOWN
Dec 13 14:16:56.498908 systemd-networkd[1512]: lxc_health: Lost carrier
Dec 13 14:16:56.527992 env[1849]: time="2024-12-13T14:16:56.527912708Z" level=info msg="shim disconnected" id=f15cad811e701ca0b11eda122000c4b0b967e8c5bde7eb7a27828a117bf8c6ff
Dec 13 14:16:56.528659 env[1849]: time="2024-12-13T14:16:56.527997632Z" level=warning msg="cleaning up after shim disconnected" id=f15cad811e701ca0b11eda122000c4b0b967e8c5bde7eb7a27828a117bf8c6ff namespace=k8s.io
Dec 13 14:16:56.528659 env[1849]: time="2024-12-13T14:16:56.528020336Z" level=info msg="cleaning up dead shim"
Dec 13 14:16:56.564003 env[1849]: time="2024-12-13T14:16:56.563922931Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:16:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4650 runtime=io.containerd.runc.v2\n"
Dec 13 14:16:56.572388 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-95b29bf9ad0335ec36fe9b269039e7e6e88c03e7d8545e4879e2985c50ee6215-rootfs.mount: Deactivated successfully.
Dec 13 14:16:56.583583 env[1849]: time="2024-12-13T14:16:56.577860032Z" level=info msg="StopContainer for \"f15cad811e701ca0b11eda122000c4b0b967e8c5bde7eb7a27828a117bf8c6ff\" returns successfully"
Dec 13 14:16:56.584640 env[1849]: time="2024-12-13T14:16:56.584546414Z" level=info msg="StopPodSandbox for \"1385dd6545468053ec922c05a1ec856745d58cd0920e9760f88a3988d5c7f251\""
Dec 13 14:16:56.585225 env[1849]: time="2024-12-13T14:16:56.585136335Z" level=info msg="Container to stop \"f15cad811e701ca0b11eda122000c4b0b967e8c5bde7eb7a27828a117bf8c6ff\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:16:56.591401 env[1849]: time="2024-12-13T14:16:56.591328137Z" level=info msg="shim disconnected" id=95b29bf9ad0335ec36fe9b269039e7e6e88c03e7d8545e4879e2985c50ee6215
Dec 13 14:16:56.591669 env[1849]: time="2024-12-13T14:16:56.591400581Z" level=warning msg="cleaning up after shim disconnected" id=95b29bf9ad0335ec36fe9b269039e7e6e88c03e7d8545e4879e2985c50ee6215 namespace=k8s.io
Dec 13 14:16:56.591669 env[1849]: time="2024-12-13T14:16:56.591424113Z" level=info msg="cleaning up dead shim"
Dec 13 14:16:56.628755 env[1849]: time="2024-12-13T14:16:56.628674261Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:16:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4677 runtime=io.containerd.runc.v2\n"
Dec 13 14:16:56.633032 env[1849]: time="2024-12-13T14:16:56.632964337Z" level=info msg="StopContainer for \"95b29bf9ad0335ec36fe9b269039e7e6e88c03e7d8545e4879e2985c50ee6215\" returns successfully"
Dec 13 14:16:56.633971 env[1849]: time="2024-12-13T14:16:56.633915158Z" level=info msg="StopPodSandbox for \"6af4e0fc49e6ef5a4f752d11eeece45f50785a1a1ec148698f4f342a854adf88\""
Dec 13 14:16:56.634583 env[1849]: time="2024-12-13T14:16:56.634523307Z" level=info msg="Container to stop \"9b6a43c3b34d9316268bae050eb52726b08b10789d3a8ae99a74d47035957def\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:16:56.634801 env[1849]: time="2024-12-13T14:16:56.634761063Z" level=info msg="Container to stop \"f105f25127e23962b911b023b84c0e46a5a48ce177d7f4ad7c66fe95be15e02c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:16:56.635111 env[1849]: time="2024-12-13T14:16:56.634954095Z" level=info msg="Container to stop \"59bf7a2f6aec33dfa8527f2d380c5d54af4c1c020ebe9c9971d3be55f786ddef\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:16:56.635401 env[1849]: time="2024-12-13T14:16:56.635310819Z" level=info msg="Container to stop \"95b29bf9ad0335ec36fe9b269039e7e6e88c03e7d8545e4879e2985c50ee6215\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:16:56.635631 env[1849]: time="2024-12-13T14:16:56.635565616Z" level=info msg="Container to stop \"36960959e67b6b7947fd4554e2209f1f3bbb3aa0f9ddb2c482cdd9579f8361b1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:16:56.672550 env[1849]: time="2024-12-13T14:16:56.672451443Z" level=info msg="shim disconnected" id=1385dd6545468053ec922c05a1ec856745d58cd0920e9760f88a3988d5c7f251
Dec 13 14:16:56.672802 env[1849]: time="2024-12-13T14:16:56.672548511Z" level=warning msg="cleaning up after shim disconnected" id=1385dd6545468053ec922c05a1ec856745d58cd0920e9760f88a3988d5c7f251 namespace=k8s.io
Dec 13 14:16:56.672802 env[1849]: time="2024-12-13T14:16:56.672572991Z" level=info msg="cleaning up dead shim"
Dec 13 14:16:56.700932 env[1849]: time="2024-12-13T14:16:56.700664539Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:16:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4718 runtime=io.containerd.runc.v2\n"
Dec 13 14:16:56.702330 env[1849]: time="2024-12-13T14:16:56.701999132Z" level=info msg="TearDown network for sandbox \"1385dd6545468053ec922c05a1ec856745d58cd0920e9760f88a3988d5c7f251\" successfully"
Dec 13 14:16:56.702330 env[1849]: time="2024-12-13T14:16:56.702169364Z" level=info msg="StopPodSandbox for \"1385dd6545468053ec922c05a1ec856745d58cd0920e9760f88a3988d5c7f251\" returns successfully"
Dec 13 14:16:56.723915 env[1849]: time="2024-12-13T14:16:56.723843833Z" level=info msg="shim disconnected" id=6af4e0fc49e6ef5a4f752d11eeece45f50785a1a1ec148698f4f342a854adf88
Dec 13 14:16:56.724452 env[1849]: time="2024-12-13T14:16:56.724375433Z" level=warning msg="cleaning up after shim disconnected" id=6af4e0fc49e6ef5a4f752d11eeece45f50785a1a1ec148698f4f342a854adf88 namespace=k8s.io
Dec 13 14:16:56.724708 env[1849]: time="2024-12-13T14:16:56.724667910Z" level=info msg="cleaning up dead shim"
Dec 13 14:16:56.745265 env[1849]: time="2024-12-13T14:16:56.745207694Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:16:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4745 runtime=io.containerd.runc.v2\n"
Dec 13 14:16:56.746044 env[1849]: time="2024-12-13T14:16:56.745991138Z" level=info msg="TearDown network for sandbox \"6af4e0fc49e6ef5a4f752d11eeece45f50785a1a1ec148698f4f342a854adf88\" successfully"
Dec 13 14:16:56.746306 env[1849]: time="2024-12-13T14:16:56.746268771Z" level=info msg="StopPodSandbox for \"6af4e0fc49e6ef5a4f752d11eeece45f50785a1a1ec148698f4f342a854adf88\" returns successfully"
Dec 13 14:16:56.834731 kubelet[3023]: I1213 14:16:56.834684 3023 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d8cae5bd-3f45-41e1-bc85-fad499e98dc9-lib-modules\") pod \"d8cae5bd-3f45-41e1-bc85-fad499e98dc9\" (UID: \"d8cae5bd-3f45-41e1-bc85-fad499e98dc9\") "
Dec 13 14:16:56.835617 kubelet[3023]: I1213 14:16:56.834790 3023 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8cae5bd-3f45-41e1-bc85-fad499e98dc9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d8cae5bd-3f45-41e1-bc85-fad499e98dc9" (UID: "d8cae5bd-3f45-41e1-bc85-fad499e98dc9"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:16:56.835769 kubelet[3023]: I1213 14:16:56.835574 3023 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d8cae5bd-3f45-41e1-bc85-fad499e98dc9-bpf-maps\") pod \"d8cae5bd-3f45-41e1-bc85-fad499e98dc9\" (UID: \"d8cae5bd-3f45-41e1-bc85-fad499e98dc9\") "
Dec 13 14:16:56.835769 kubelet[3023]: I1213 14:16:56.835688 3023 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d8cae5bd-3f45-41e1-bc85-fad499e98dc9-etc-cni-netd\") pod \"d8cae5bd-3f45-41e1-bc85-fad499e98dc9\" (UID: \"d8cae5bd-3f45-41e1-bc85-fad499e98dc9\") "
Dec 13 14:16:56.835769 kubelet[3023]: I1213 14:16:56.835735 3023 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d8cae5bd-3f45-41e1-bc85-fad499e98dc9-xtables-lock\") pod \"d8cae5bd-3f45-41e1-bc85-fad499e98dc9\" (UID: \"d8cae5bd-3f45-41e1-bc85-fad499e98dc9\") "
Dec 13 14:16:56.835968 kubelet[3023]: I1213 14:16:56.835788 3023 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d8cae5bd-3f45-41e1-bc85-fad499e98dc9-clustermesh-secrets\") pod \"d8cae5bd-3f45-41e1-bc85-fad499e98dc9\" (UID: \"d8cae5bd-3f45-41e1-bc85-fad499e98dc9\") "
Dec 13 14:16:56.835968 kubelet[3023]: I1213 14:16:56.835836 3023 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d8cae5bd-3f45-41e1-bc85-fad499e98dc9-cilium-config-path\") pod \"d8cae5bd-3f45-41e1-bc85-fad499e98dc9\" (UID: \"d8cae5bd-3f45-41e1-bc85-fad499e98dc9\") "
Dec 13 14:16:56.835968 kubelet[3023]: I1213 14:16:56.835880 3023 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d8cae5bd-3f45-41e1-bc85-fad499e98dc9-hostproc\") pod \"d8cae5bd-3f45-41e1-bc85-fad499e98dc9\" (UID: \"d8cae5bd-3f45-41e1-bc85-fad499e98dc9\") "
Dec 13 14:16:56.835968 kubelet[3023]: I1213 14:16:56.835928 3023 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d8cae5bd-3f45-41e1-bc85-fad499e98dc9-host-proc-sys-kernel\") pod \"d8cae5bd-3f45-41e1-bc85-fad499e98dc9\" (UID: \"d8cae5bd-3f45-41e1-bc85-fad499e98dc9\") "
Dec 13 14:16:56.836277 kubelet[3023]: I1213 14:16:56.835972 3023 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d8cae5bd-3f45-41e1-bc85-fad499e98dc9-cilium-cgroup\") pod \"d8cae5bd-3f45-41e1-bc85-fad499e98dc9\" (UID: \"d8cae5bd-3f45-41e1-bc85-fad499e98dc9\") "
Dec 13 14:16:56.836277 kubelet[3023]: I1213 14:16:56.836020 3023 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d8cae5bd-3f45-41e1-bc85-fad499e98dc9-hubble-tls\") pod \"d8cae5bd-3f45-41e1-bc85-fad499e98dc9\" (UID: \"d8cae5bd-3f45-41e1-bc85-fad499e98dc9\") "
Dec 13 14:16:56.836277 kubelet[3023]: I1213 14:16:56.836121 3023 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/83d721e2-4439-42e1-abfe-8f2d2ce74d4e-cilium-config-path\") pod \"83d721e2-4439-42e1-abfe-8f2d2ce74d4e\" (UID: \"83d721e2-4439-42e1-abfe-8f2d2ce74d4e\") "
Dec 13 14:16:56.836277 kubelet[3023]: I1213 14:16:56.836175 3023 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d8cae5bd-3f45-41e1-bc85-fad499e98dc9-cilium-run\") pod \"d8cae5bd-3f45-41e1-bc85-fad499e98dc9\" (UID: \"d8cae5bd-3f45-41e1-bc85-fad499e98dc9\") "
Dec 13 14:16:56.836277 kubelet[3023]: I1213 14:16:56.836221 3023 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4gh7\" (UniqueName: \"kubernetes.io/projected/83d721e2-4439-42e1-abfe-8f2d2ce74d4e-kube-api-access-l4gh7\") pod \"83d721e2-4439-42e1-abfe-8f2d2ce74d4e\" (UID: \"83d721e2-4439-42e1-abfe-8f2d2ce74d4e\") "
Dec 13 14:16:56.836277 kubelet[3023]: I1213 14:16:56.836263 3023 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d8cae5bd-3f45-41e1-bc85-fad499e98dc9-cni-path\") pod \"d8cae5bd-3f45-41e1-bc85-fad499e98dc9\" (UID: \"d8cae5bd-3f45-41e1-bc85-fad499e98dc9\") "
Dec 13 14:16:56.836687 kubelet[3023]: I1213 14:16:56.836304 3023 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d8cae5bd-3f45-41e1-bc85-fad499e98dc9-host-proc-sys-net\") pod \"d8cae5bd-3f45-41e1-bc85-fad499e98dc9\" (UID: \"d8cae5bd-3f45-41e1-bc85-fad499e98dc9\") "
Dec 13 14:16:56.836687 kubelet[3023]: I1213 14:16:56.836353 3023 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snmsn\" (UniqueName: \"kubernetes.io/projected/d8cae5bd-3f45-41e1-bc85-fad499e98dc9-kube-api-access-snmsn\") pod \"d8cae5bd-3f45-41e1-bc85-fad499e98dc9\" (UID: \"d8cae5bd-3f45-41e1-bc85-fad499e98dc9\") "
Dec 13 14:16:56.836687 kubelet[3023]: I1213 14:16:56.836425 3023 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d8cae5bd-3f45-41e1-bc85-fad499e98dc9-lib-modules\") on node \"ip-172-31-27-214\" DevicePath \"\""
Dec 13 14:16:56.836985 kubelet[3023]: I1213 14:16:56.836932 3023 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8cae5bd-3f45-41e1-bc85-fad499e98dc9-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d8cae5bd-3f45-41e1-bc85-fad499e98dc9" (UID: "d8cae5bd-3f45-41e1-bc85-fad499e98dc9"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:16:56.837307 kubelet[3023]: I1213 14:16:56.837263 3023 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8cae5bd-3f45-41e1-bc85-fad499e98dc9-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d8cae5bd-3f45-41e1-bc85-fad499e98dc9" (UID: "d8cae5bd-3f45-41e1-bc85-fad499e98dc9"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:16:56.837545 kubelet[3023]: I1213 14:16:56.837507 3023 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8cae5bd-3f45-41e1-bc85-fad499e98dc9-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d8cae5bd-3f45-41e1-bc85-fad499e98dc9" (UID: "d8cae5bd-3f45-41e1-bc85-fad499e98dc9"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:16:56.837796 kubelet[3023]: I1213 14:16:56.837753 3023 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8cae5bd-3f45-41e1-bc85-fad499e98dc9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d8cae5bd-3f45-41e1-bc85-fad499e98dc9" (UID: "d8cae5bd-3f45-41e1-bc85-fad499e98dc9"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:16:56.843025 kubelet[3023]: I1213 14:16:56.842947 3023 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8cae5bd-3f45-41e1-bc85-fad499e98dc9-kube-api-access-snmsn" (OuterVolumeSpecName: "kube-api-access-snmsn") pod "d8cae5bd-3f45-41e1-bc85-fad499e98dc9" (UID: "d8cae5bd-3f45-41e1-bc85-fad499e98dc9"). InnerVolumeSpecName "kube-api-access-snmsn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:16:56.843434 kubelet[3023]: I1213 14:16:56.843391 3023 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8cae5bd-3f45-41e1-bc85-fad499e98dc9-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d8cae5bd-3f45-41e1-bc85-fad499e98dc9" (UID: "d8cae5bd-3f45-41e1-bc85-fad499e98dc9"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 14:16:56.849280 kubelet[3023]: I1213 14:16:56.849039 3023 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8cae5bd-3f45-41e1-bc85-fad499e98dc9-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d8cae5bd-3f45-41e1-bc85-fad499e98dc9" (UID: "d8cae5bd-3f45-41e1-bc85-fad499e98dc9"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:16:56.849581 kubelet[3023]: I1213 14:16:56.849525 3023 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8cae5bd-3f45-41e1-bc85-fad499e98dc9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d8cae5bd-3f45-41e1-bc85-fad499e98dc9" (UID: "d8cae5bd-3f45-41e1-bc85-fad499e98dc9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 14:16:56.849799 kubelet[3023]: I1213 14:16:56.849765 3023 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8cae5bd-3f45-41e1-bc85-fad499e98dc9-hostproc" (OuterVolumeSpecName: "hostproc") pod "d8cae5bd-3f45-41e1-bc85-fad499e98dc9" (UID: "d8cae5bd-3f45-41e1-bc85-fad499e98dc9"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:16:56.850123 kubelet[3023]: I1213 14:16:56.850061 3023 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8cae5bd-3f45-41e1-bc85-fad499e98dc9-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d8cae5bd-3f45-41e1-bc85-fad499e98dc9" (UID: "d8cae5bd-3f45-41e1-bc85-fad499e98dc9"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:16:56.855716 kubelet[3023]: I1213 14:16:56.855633 3023 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83d721e2-4439-42e1-abfe-8f2d2ce74d4e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "83d721e2-4439-42e1-abfe-8f2d2ce74d4e" (UID: "83d721e2-4439-42e1-abfe-8f2d2ce74d4e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 14:16:56.855888 kubelet[3023]: I1213 14:16:56.855738 3023 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8cae5bd-3f45-41e1-bc85-fad499e98dc9-cni-path" (OuterVolumeSpecName: "cni-path") pod "d8cae5bd-3f45-41e1-bc85-fad499e98dc9" (UID: "d8cae5bd-3f45-41e1-bc85-fad499e98dc9"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:16:56.855888 kubelet[3023]: I1213 14:16:56.855783 3023 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8cae5bd-3f45-41e1-bc85-fad499e98dc9-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d8cae5bd-3f45-41e1-bc85-fad499e98dc9" (UID: "d8cae5bd-3f45-41e1-bc85-fad499e98dc9"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:16:56.855888 kubelet[3023]: I1213 14:16:56.855830 3023 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8cae5bd-3f45-41e1-bc85-fad499e98dc9-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d8cae5bd-3f45-41e1-bc85-fad499e98dc9" (UID: "d8cae5bd-3f45-41e1-bc85-fad499e98dc9"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:16:56.857016 kubelet[3023]: I1213 14:16:56.856966 3023 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83d721e2-4439-42e1-abfe-8f2d2ce74d4e-kube-api-access-l4gh7" (OuterVolumeSpecName: "kube-api-access-l4gh7") pod "83d721e2-4439-42e1-abfe-8f2d2ce74d4e" (UID: "83d721e2-4439-42e1-abfe-8f2d2ce74d4e"). InnerVolumeSpecName "kube-api-access-l4gh7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:16:56.937386 kubelet[3023]: I1213 14:16:56.937249 3023 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-l4gh7\" (UniqueName: \"kubernetes.io/projected/83d721e2-4439-42e1-abfe-8f2d2ce74d4e-kube-api-access-l4gh7\") on node \"ip-172-31-27-214\" DevicePath \"\""
Dec 13 14:16:56.937635 kubelet[3023]: I1213 14:16:56.937601 3023 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d8cae5bd-3f45-41e1-bc85-fad499e98dc9-cni-path\") on node \"ip-172-31-27-214\" DevicePath \"\""
Dec 13 14:16:56.937803 kubelet[3023]: I1213 14:16:56.937775 3023 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d8cae5bd-3f45-41e1-bc85-fad499e98dc9-host-proc-sys-net\") on node \"ip-172-31-27-214\" DevicePath \"\""
Dec 13 14:16:56.937985 kubelet[3023]: I1213 14:16:56.937951 3023 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-snmsn\" (UniqueName: \"kubernetes.io/projected/d8cae5bd-3f45-41e1-bc85-fad499e98dc9-kube-api-access-snmsn\") on node \"ip-172-31-27-214\" DevicePath \"\""
Dec 13 14:16:56.938211 kubelet[3023]: I1213 14:16:56.938179 3023 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d8cae5bd-3f45-41e1-bc85-fad499e98dc9-bpf-maps\") on node \"ip-172-31-27-214\" DevicePath \"\""
Dec 13 14:16:56.938421 kubelet[3023]: I1213 14:16:56.938394 3023 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d8cae5bd-3f45-41e1-bc85-fad499e98dc9-etc-cni-netd\") on node \"ip-172-31-27-214\" DevicePath \"\""
Dec 13 14:16:56.938584 kubelet[3023]: I1213 14:16:56.938557 3023 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d8cae5bd-3f45-41e1-bc85-fad499e98dc9-xtables-lock\") on node \"ip-172-31-27-214\" DevicePath \"\""
Dec 13 14:16:56.938724 kubelet[3023]: I1213 14:16:56.938701 3023 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d8cae5bd-3f45-41e1-bc85-fad499e98dc9-clustermesh-secrets\") on node \"ip-172-31-27-214\" DevicePath \"\""
Dec 13 14:16:56.938866 kubelet[3023]: I1213 14:16:56.938843 3023 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d8cae5bd-3f45-41e1-bc85-fad499e98dc9-cilium-config-path\") on node \"ip-172-31-27-214\" DevicePath \"\""
Dec 13 14:16:56.939027 kubelet[3023]: I1213 14:16:56.939004 3023 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d8cae5bd-3f45-41e1-bc85-fad499e98dc9-hostproc\") on node \"ip-172-31-27-214\" DevicePath \"\""
Dec 13 14:16:56.939284 kubelet[3023]: I1213 14:16:56.939258 3023 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d8cae5bd-3f45-41e1-bc85-fad499e98dc9-host-proc-sys-kernel\") on node \"ip-172-31-27-214\" DevicePath \"\""
Dec 13 14:16:56.939448 kubelet[3023]: I1213 14:16:56.939425 3023 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d8cae5bd-3f45-41e1-bc85-fad499e98dc9-cilium-cgroup\") on node \"ip-172-31-27-214\" DevicePath \"\""
Dec 13 14:16:56.939587 kubelet[3023]: I1213 14:16:56.939564 3023 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d8cae5bd-3f45-41e1-bc85-fad499e98dc9-hubble-tls\") on node \"ip-172-31-27-214\" DevicePath \"\""
Dec 13 14:16:56.939729 kubelet[3023]: I1213 14:16:56.939708 3023 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/83d721e2-4439-42e1-abfe-8f2d2ce74d4e-cilium-config-path\") on node \"ip-172-31-27-214\" DevicePath \"\""
Dec 13 14:16:56.939845 kubelet[3023]: I1213 14:16:56.939825 3023 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d8cae5bd-3f45-41e1-bc85-fad499e98dc9-cilium-run\") on node \"ip-172-31-27-214\" DevicePath \"\""
Dec 13 14:16:57.394759 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1385dd6545468053ec922c05a1ec856745d58cd0920e9760f88a3988d5c7f251-rootfs.mount: Deactivated successfully.
Dec 13 14:16:57.395396 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1385dd6545468053ec922c05a1ec856745d58cd0920e9760f88a3988d5c7f251-shm.mount: Deactivated successfully.
Dec 13 14:16:57.395811 systemd[1]: var-lib-kubelet-pods-83d721e2\x2d4439\x2d42e1\x2dabfe\x2d8f2d2ce74d4e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl4gh7.mount: Deactivated successfully.
Dec 13 14:16:57.396234 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6af4e0fc49e6ef5a4f752d11eeece45f50785a1a1ec148698f4f342a854adf88-rootfs.mount: Deactivated successfully.
Dec 13 14:16:57.396655 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6af4e0fc49e6ef5a4f752d11eeece45f50785a1a1ec148698f4f342a854adf88-shm.mount: Deactivated successfully.
Dec 13 14:16:57.397020 systemd[1]: var-lib-kubelet-pods-d8cae5bd\x2d3f45\x2d41e1\x2dbc85\x2dfad499e98dc9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsnmsn.mount: Deactivated successfully.
Dec 13 14:16:57.397477 systemd[1]: var-lib-kubelet-pods-d8cae5bd\x2d3f45\x2d41e1\x2dbc85\x2dfad499e98dc9-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 14:16:57.397843 systemd[1]: var-lib-kubelet-pods-d8cae5bd\x2d3f45\x2d41e1\x2dbc85\x2dfad499e98dc9-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 14:16:57.555305 kubelet[3023]: I1213 14:16:57.555258 3023 scope.go:117] "RemoveContainer" containerID="f15cad811e701ca0b11eda122000c4b0b967e8c5bde7eb7a27828a117bf8c6ff"
Dec 13 14:16:57.559044 env[1849]: time="2024-12-13T14:16:57.558986133Z" level=info msg="RemoveContainer for \"f15cad811e701ca0b11eda122000c4b0b967e8c5bde7eb7a27828a117bf8c6ff\""
Dec 13 14:16:57.571767 env[1849]: time="2024-12-13T14:16:57.571674633Z" level=info msg="RemoveContainer for \"f15cad811e701ca0b11eda122000c4b0b967e8c5bde7eb7a27828a117bf8c6ff\" returns successfully"
Dec 13 14:16:57.583017 kubelet[3023]: I1213 14:16:57.582975 3023 scope.go:117] "RemoveContainer" containerID="f15cad811e701ca0b11eda122000c4b0b967e8c5bde7eb7a27828a117bf8c6ff"
Dec 13 14:16:57.584266 env[1849]: time="2024-12-13T14:16:57.583907276Z" level=error msg="ContainerStatus for \"f15cad811e701ca0b11eda122000c4b0b967e8c5bde7eb7a27828a117bf8c6ff\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f15cad811e701ca0b11eda122000c4b0b967e8c5bde7eb7a27828a117bf8c6ff\": not found"
Dec 13 14:16:57.595832 kubelet[3023]: E1213 14:16:57.588393 3023 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f15cad811e701ca0b11eda122000c4b0b967e8c5bde7eb7a27828a117bf8c6ff\": not found" containerID="f15cad811e701ca0b11eda122000c4b0b967e8c5bde7eb7a27828a117bf8c6ff"
Dec 13 14:16:57.595832 kubelet[3023]: I1213 14:16:57.588575 3023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f15cad811e701ca0b11eda122000c4b0b967e8c5bde7eb7a27828a117bf8c6ff"} err="failed to get container status \"f15cad811e701ca0b11eda122000c4b0b967e8c5bde7eb7a27828a117bf8c6ff\": rpc error: code = NotFound desc = an error occurred when try to find container \"f15cad811e701ca0b11eda122000c4b0b967e8c5bde7eb7a27828a117bf8c6ff\": not found"
Dec 13 14:16:57.595832 kubelet[3023]: I1213 14:16:57.588606 3023 scope.go:117] "RemoveContainer" containerID="95b29bf9ad0335ec36fe9b269039e7e6e88c03e7d8545e4879e2985c50ee6215"
Dec 13 14:16:57.600372 env[1849]: time="2024-12-13T14:16:57.600292776Z" level=info msg="RemoveContainer for \"95b29bf9ad0335ec36fe9b269039e7e6e88c03e7d8545e4879e2985c50ee6215\""
Dec 13 14:16:57.620969 env[1849]: time="2024-12-13T14:16:57.620891084Z" level=info msg="RemoveContainer for \"95b29bf9ad0335ec36fe9b269039e7e6e88c03e7d8545e4879e2985c50ee6215\" returns successfully"
Dec 13 14:16:57.622431 kubelet[3023]: I1213 14:16:57.622381 3023 scope.go:117] "RemoveContainer" containerID="59bf7a2f6aec33dfa8527f2d380c5d54af4c1c020ebe9c9971d3be55f786ddef"
Dec 13 14:16:57.629038 env[1849]: time="2024-12-13T14:16:57.628964595Z" level=info msg="RemoveContainer for \"59bf7a2f6aec33dfa8527f2d380c5d54af4c1c020ebe9c9971d3be55f786ddef\""
Dec 13 14:16:57.640290 env[1849]: time="2024-12-13T14:16:57.640200518Z" level=info msg="RemoveContainer for \"59bf7a2f6aec33dfa8527f2d380c5d54af4c1c020ebe9c9971d3be55f786ddef\" returns successfully"
Dec 13 14:16:57.641004 kubelet[3023]: I1213 14:16:57.640948 3023 scope.go:117] "RemoveContainer" containerID="36960959e67b6b7947fd4554e2209f1f3bbb3aa0f9ddb2c482cdd9579f8361b1"
Dec 13 14:16:57.643828 env[1849]: time="2024-12-13T14:16:57.643749017Z" level=info msg="RemoveContainer for \"36960959e67b6b7947fd4554e2209f1f3bbb3aa0f9ddb2c482cdd9579f8361b1\""
Dec 13 14:16:57.651201 env[1849]: time="2024-12-13T14:16:57.651022716Z" level=info msg="RemoveContainer for \"36960959e67b6b7947fd4554e2209f1f3bbb3aa0f9ddb2c482cdd9579f8361b1\" returns successfully"
Dec 13 14:16:57.652972 kubelet[3023]: I1213 14:16:57.652711 3023 scope.go:117] "RemoveContainer" containerID="f105f25127e23962b911b023b84c0e46a5a48ce177d7f4ad7c66fe95be15e02c"
Dec 13 14:16:57.655732 env[1849]: time="2024-12-13T14:16:57.655665293Z" level=info msg="RemoveContainer for \"f105f25127e23962b911b023b84c0e46a5a48ce177d7f4ad7c66fe95be15e02c\""
Dec 13 14:16:57.662575 env[1849]: time="2024-12-13T14:16:57.662507279Z" level=info msg="RemoveContainer for \"f105f25127e23962b911b023b84c0e46a5a48ce177d7f4ad7c66fe95be15e02c\" returns successfully"
Dec 13 14:16:57.663450 kubelet[3023]: I1213 14:16:57.663381 3023 scope.go:117] "RemoveContainer" containerID="9b6a43c3b34d9316268bae050eb52726b08b10789d3a8ae99a74d47035957def"
Dec 13 14:16:57.666004 env[1849]: time="2024-12-13T14:16:57.665901146Z" level=info msg="RemoveContainer for \"9b6a43c3b34d9316268bae050eb52726b08b10789d3a8ae99a74d47035957def\""
Dec 13 14:16:57.672622 env[1849]: time="2024-12-13T14:16:57.672539145Z" level=info msg="RemoveContainer for \"9b6a43c3b34d9316268bae050eb52726b08b10789d3a8ae99a74d47035957def\" returns successfully"
Dec 13 14:16:57.673325 kubelet[3023]: I1213 14:16:57.673044 3023 scope.go:117] "RemoveContainer" containerID="95b29bf9ad0335ec36fe9b269039e7e6e88c03e7d8545e4879e2985c50ee6215"
Dec 13 14:16:57.673749 env[1849]: time="2024-12-13T14:16:57.673628182Z" level=error msg="ContainerStatus for \"95b29bf9ad0335ec36fe9b269039e7e6e88c03e7d8545e4879e2985c50ee6215\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"95b29bf9ad0335ec36fe9b269039e7e6e88c03e7d8545e4879e2985c50ee6215\": not found"
Dec 13 14:16:57.674430 kubelet[3023]: E1213 14:16:57.674059 3023 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"95b29bf9ad0335ec36fe9b269039e7e6e88c03e7d8545e4879e2985c50ee6215\": not found" containerID="95b29bf9ad0335ec36fe9b269039e7e6e88c03e7d8545e4879e2985c50ee6215"
Dec 13 14:16:57.674430 kubelet[3023]: I1213 14:16:57.674214 3023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"95b29bf9ad0335ec36fe9b269039e7e6e88c03e7d8545e4879e2985c50ee6215"} err="failed to get container status \"95b29bf9ad0335ec36fe9b269039e7e6e88c03e7d8545e4879e2985c50ee6215\": rpc error: code = NotFound desc = an error occurred when try to find container \"95b29bf9ad0335ec36fe9b269039e7e6e88c03e7d8545e4879e2985c50ee6215\": not found"
Dec 13 14:16:57.674430 kubelet[3023]: I1213 14:16:57.674239 3023 scope.go:117] "RemoveContainer" containerID="59bf7a2f6aec33dfa8527f2d380c5d54af4c1c020ebe9c9971d3be55f786ddef"
Dec 13 14:16:57.674724 env[1849]: time="2024-12-13T14:16:57.674626331Z" level=error msg="ContainerStatus for \"59bf7a2f6aec33dfa8527f2d380c5d54af4c1c020ebe9c9971d3be55f786ddef\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"59bf7a2f6aec33dfa8527f2d380c5d54af4c1c020ebe9c9971d3be55f786ddef\": not found"
Dec 13 14:16:57.675357 kubelet[3023]: E1213 14:16:57.674987 3023 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"59bf7a2f6aec33dfa8527f2d380c5d54af4c1c020ebe9c9971d3be55f786ddef\": not found" containerID="59bf7a2f6aec33dfa8527f2d380c5d54af4c1c020ebe9c9971d3be55f786ddef"
Dec 13 14:16:57.675357 kubelet[3023]: I1213 14:16:57.675146 3023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"59bf7a2f6aec33dfa8527f2d380c5d54af4c1c020ebe9c9971d3be55f786ddef"} err="failed to get container status \"59bf7a2f6aec33dfa8527f2d380c5d54af4c1c020ebe9c9971d3be55f786ddef\": rpc error: code = NotFound desc = an error occurred when try to find container \"59bf7a2f6aec33dfa8527f2d380c5d54af4c1c020ebe9c9971d3be55f786ddef\": not found"
Dec 13 14:16:57.675357 kubelet[3023]: I1213 14:16:57.675196 3023 scope.go:117] "RemoveContainer" containerID="36960959e67b6b7947fd4554e2209f1f3bbb3aa0f9ddb2c482cdd9579f8361b1"
Dec 13 14:16:57.675990 env[1849]: time="2024-12-13T14:16:57.675887448Z" level=error msg="ContainerStatus for \"36960959e67b6b7947fd4554e2209f1f3bbb3aa0f9ddb2c482cdd9579f8361b1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"36960959e67b6b7947fd4554e2209f1f3bbb3aa0f9ddb2c482cdd9579f8361b1\": not found"
Dec 13 14:16:57.676468 kubelet[3023]: E1213 14:16:57.676333 3023 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"36960959e67b6b7947fd4554e2209f1f3bbb3aa0f9ddb2c482cdd9579f8361b1\": not found" containerID="36960959e67b6b7947fd4554e2209f1f3bbb3aa0f9ddb2c482cdd9579f8361b1"
Dec 13 14:16:57.676789 kubelet[3023]: I1213 14:16:57.676650 3023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"36960959e67b6b7947fd4554e2209f1f3bbb3aa0f9ddb2c482cdd9579f8361b1"} err="failed to get container status \"36960959e67b6b7947fd4554e2209f1f3bbb3aa0f9ddb2c482cdd9579f8361b1\": rpc error: code = NotFound desc = an error occurred when try to find container \"36960959e67b6b7947fd4554e2209f1f3bbb3aa0f9ddb2c482cdd9579f8361b1\": not found"
Dec 13 14:16:57.676789 kubelet[3023]: I1213 14:16:57.676709 3023 scope.go:117] "RemoveContainer" containerID="f105f25127e23962b911b023b84c0e46a5a48ce177d7f4ad7c66fe95be15e02c"
Dec 13 14:16:57.677632 env[1849]: time="2024-12-13T14:16:57.677500417Z" level=error msg="ContainerStatus for \"f105f25127e23962b911b023b84c0e46a5a48ce177d7f4ad7c66fe95be15e02c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f105f25127e23962b911b023b84c0e46a5a48ce177d7f4ad7c66fe95be15e02c\": not found"
Dec 13 14:16:57.678053 kubelet[3023]: E1213 14:16:57.678020 3023 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f105f25127e23962b911b023b84c0e46a5a48ce177d7f4ad7c66fe95be15e02c\": not found" containerID="f105f25127e23962b911b023b84c0e46a5a48ce177d7f4ad7c66fe95be15e02c"
Dec 13 14:16:57.678333 kubelet[3023]: I1213 14:16:57.678289 3023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f105f25127e23962b911b023b84c0e46a5a48ce177d7f4ad7c66fe95be15e02c"} err="failed to get container status \"f105f25127e23962b911b023b84c0e46a5a48ce177d7f4ad7c66fe95be15e02c\": rpc error: code = NotFound desc = an error occurred when try to find container \"f105f25127e23962b911b023b84c0e46a5a48ce177d7f4ad7c66fe95be15e02c\": not found"
Dec 13 14:16:57.678483 kubelet[3023]: I1213 14:16:57.678461 3023 scope.go:117] "RemoveContainer" containerID="9b6a43c3b34d9316268bae050eb52726b08b10789d3a8ae99a74d47035957def"
Dec 13 14:16:57.679206 env[1849]: time="2024-12-13T14:16:57.679031247Z" level=error msg="ContainerStatus for \"9b6a43c3b34d9316268bae050eb52726b08b10789d3a8ae99a74d47035957def\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9b6a43c3b34d9316268bae050eb52726b08b10789d3a8ae99a74d47035957def\": not found"
Dec 13 14:16:57.679568 kubelet[3023]: E1213 14:16:57.679529 3023 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9b6a43c3b34d9316268bae050eb52726b08b10789d3a8ae99a74d47035957def\": not found" containerID="9b6a43c3b34d9316268bae050eb52726b08b10789d3a8ae99a74d47035957def"
Dec 13 14:16:57.679705 kubelet[3023]: I1213 14:16:57.679598 3023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9b6a43c3b34d9316268bae050eb52726b08b10789d3a8ae99a74d47035957def"} err="failed to get container status \"9b6a43c3b34d9316268bae050eb52726b08b10789d3a8ae99a74d47035957def\": rpc error: code = NotFound desc = an error occurred when try to find container \"9b6a43c3b34d9316268bae050eb52726b08b10789d3a8ae99a74d47035957def\": not found"
Dec 13 14:16:58.080206 kubelet[3023]: I1213 14:16:58.080166 3023 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="83d721e2-4439-42e1-abfe-8f2d2ce74d4e" path="/var/lib/kubelet/pods/83d721e2-4439-42e1-abfe-8f2d2ce74d4e/volumes"
Dec 13 14:16:58.082131 kubelet[3023]: I1213 14:16:58.082097 3023 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="d8cae5bd-3f45-41e1-bc85-fad499e98dc9" path="/var/lib/kubelet/pods/d8cae5bd-3f45-41e1-bc85-fad499e98dc9/volumes"
Dec 13 14:16:58.315758 sshd[4589]: pam_unix(sshd:session): session closed for user core
Dec 13 14:16:58.320836 systemd[1]: sshd@24-172.31.27.214:22-139.178.89.65:55690.service: Deactivated successfully.
Dec 13 14:16:58.322375 systemd[1]: session-25.scope: Deactivated successfully.
Dec 13 14:16:58.324631 systemd-logind[1832]: Session 25 logged out. Waiting for processes to exit.
Dec 13 14:16:58.326848 systemd-logind[1832]: Removed session 25.
Dec 13 14:16:58.343878 systemd[1]: Started sshd@25-172.31.27.214:22-139.178.89.65:52728.service.
Dec 13 14:16:58.525863 sshd[4763]: Accepted publickey for core from 139.178.89.65 port 52728 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q
Dec 13 14:16:58.529130 sshd[4763]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:16:58.538706 systemd-logind[1832]: New session 26 of user core.
Dec 13 14:16:58.539429 systemd[1]: Started session-26.scope.
Dec 13 14:16:59.330812 kubelet[3023]: E1213 14:16:59.330771 3023 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 14:16:59.731675 kubelet[3023]: I1213 14:16:59.731484 3023 topology_manager.go:215] "Topology Admit Handler" podUID="ee574d2b-537d-4061-ab52-6425076d7d6c" podNamespace="kube-system" podName="cilium-5zxmb" Dec 13 14:16:59.731980 kubelet[3023]: E1213 14:16:59.731951 3023 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d8cae5bd-3f45-41e1-bc85-fad499e98dc9" containerName="cilium-agent" Dec 13 14:16:59.732445 kubelet[3023]: E1213 14:16:59.732397 3023 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d8cae5bd-3f45-41e1-bc85-fad499e98dc9" containerName="mount-cgroup" Dec 13 14:16:59.732650 kubelet[3023]: E1213 14:16:59.732625 3023 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d8cae5bd-3f45-41e1-bc85-fad499e98dc9" containerName="apply-sysctl-overwrites" Dec 13 14:16:59.732807 kubelet[3023]: E1213 14:16:59.732782 3023 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d8cae5bd-3f45-41e1-bc85-fad499e98dc9" containerName="mount-bpf-fs" Dec 13 14:16:59.732987 kubelet[3023]: E1213 14:16:59.732963 3023 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d8cae5bd-3f45-41e1-bc85-fad499e98dc9" containerName="clean-cilium-state" Dec 13 14:16:59.733157 kubelet[3023]: E1213 14:16:59.733135 3023 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="83d721e2-4439-42e1-abfe-8f2d2ce74d4e" containerName="cilium-operator" Dec 13 14:16:59.733372 kubelet[3023]: I1213 14:16:59.733327 3023 memory_manager.go:354] "RemoveStaleState removing state" podUID="83d721e2-4439-42e1-abfe-8f2d2ce74d4e" containerName="cilium-operator" Dec 13 14:16:59.733568 kubelet[3023]: I1213 14:16:59.733543 3023 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8cae5bd-3f45-41e1-bc85-fad499e98dc9" containerName="cilium-agent" Dec 13 14:16:59.768434 sshd[4763]: pam_unix(sshd:session): session closed for user core Dec 13 14:16:59.775945 systemd[1]: sshd@25-172.31.27.214:22-139.178.89.65:52728.service: Deactivated successfully. Dec 13 14:16:59.778990 systemd[1]: session-26.scope: Deactivated successfully. Dec 13 14:16:59.780581 systemd-logind[1832]: Session 26 logged out. Waiting for processes to exit. Dec 13 14:16:59.782792 systemd-logind[1832]: Removed session 26. Dec 13 14:16:59.795053 systemd[1]: Started sshd@26-172.31.27.214:22-139.178.89.65:52732.service. 
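The RemoveStaleState errors logged while cilium-5zxmb is admitted are likewise routine bookkeeping, not failures: kubelet's CPU and memory managers keep per-container state keyed by pod UID and container name, and admission of a new pod triggers eviction of entries left behind by the two cilium pods deleted above. A rough sketch of that pruning, with illustrative types that are not kubelet's own:

    package managerutil

    // key identifies per-container manager state the way the log lines
    // report it: by pod UID plus container name. Types here are
    // illustrative only.
    type key struct{ podUID, container string }

    // PruneStale drops state for containers whose pod is no longer
    // active, mirroring the "RemoveStaleState: removing container"
    // entries above for the deleted cilium pods.
    func PruneStale(state map[key]struct{}, activePods map[string]bool) {
    	for k := range state {
    		if !activePods[k.podUID] {
    			delete(state, k)
    		}
    	}
    }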
Dec 13 14:16:59.863804 kubelet[3023]: I1213 14:16:59.863745 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ee574d2b-537d-4061-ab52-6425076d7d6c-lib-modules\") pod \"cilium-5zxmb\" (UID: \"ee574d2b-537d-4061-ab52-6425076d7d6c\") " pod="kube-system/cilium-5zxmb" Dec 13 14:16:59.864215 kubelet[3023]: I1213 14:16:59.864153 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ee574d2b-537d-4061-ab52-6425076d7d6c-host-proc-sys-net\") pod \"cilium-5zxmb\" (UID: \"ee574d2b-537d-4061-ab52-6425076d7d6c\") " pod="kube-system/cilium-5zxmb" Dec 13 14:16:59.864548 kubelet[3023]: I1213 14:16:59.864485 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ee574d2b-537d-4061-ab52-6425076d7d6c-hubble-tls\") pod \"cilium-5zxmb\" (UID: \"ee574d2b-537d-4061-ab52-6425076d7d6c\") " pod="kube-system/cilium-5zxmb" Dec 13 14:16:59.864813 kubelet[3023]: I1213 14:16:59.864784 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ee574d2b-537d-4061-ab52-6425076d7d6c-cilium-cgroup\") pod \"cilium-5zxmb\" (UID: \"ee574d2b-537d-4061-ab52-6425076d7d6c\") " pod="kube-system/cilium-5zxmb" Dec 13 14:16:59.865048 kubelet[3023]: I1213 14:16:59.865021 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ee574d2b-537d-4061-ab52-6425076d7d6c-host-proc-sys-kernel\") pod \"cilium-5zxmb\" (UID: \"ee574d2b-537d-4061-ab52-6425076d7d6c\") " pod="kube-system/cilium-5zxmb" Dec 13 14:16:59.865329 kubelet[3023]: I1213 14:16:59.865305 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpj7f\" (UniqueName: \"kubernetes.io/projected/ee574d2b-537d-4061-ab52-6425076d7d6c-kube-api-access-kpj7f\") pod \"cilium-5zxmb\" (UID: \"ee574d2b-537d-4061-ab52-6425076d7d6c\") " pod="kube-system/cilium-5zxmb" Dec 13 14:16:59.865566 kubelet[3023]: I1213 14:16:59.865536 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ee574d2b-537d-4061-ab52-6425076d7d6c-bpf-maps\") pod \"cilium-5zxmb\" (UID: \"ee574d2b-537d-4061-ab52-6425076d7d6c\") " pod="kube-system/cilium-5zxmb" Dec 13 14:16:59.865789 kubelet[3023]: I1213 14:16:59.865762 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ee574d2b-537d-4061-ab52-6425076d7d6c-cni-path\") pod \"cilium-5zxmb\" (UID: \"ee574d2b-537d-4061-ab52-6425076d7d6c\") " pod="kube-system/cilium-5zxmb" Dec 13 14:16:59.866000 kubelet[3023]: I1213 14:16:59.865976 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ee574d2b-537d-4061-ab52-6425076d7d6c-cilium-run\") pod \"cilium-5zxmb\" (UID: \"ee574d2b-537d-4061-ab52-6425076d7d6c\") " pod="kube-system/cilium-5zxmb" Dec 13 14:16:59.866214 kubelet[3023]: I1213 14:16:59.866191 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/ee574d2b-537d-4061-ab52-6425076d7d6c-xtables-lock\") pod \"cilium-5zxmb\" (UID: \"ee574d2b-537d-4061-ab52-6425076d7d6c\") " pod="kube-system/cilium-5zxmb" Dec 13 14:16:59.866420 kubelet[3023]: I1213 14:16:59.866397 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ee574d2b-537d-4061-ab52-6425076d7d6c-clustermesh-secrets\") pod \"cilium-5zxmb\" (UID: \"ee574d2b-537d-4061-ab52-6425076d7d6c\") " pod="kube-system/cilium-5zxmb" Dec 13 14:16:59.866607 kubelet[3023]: I1213 14:16:59.866585 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ee574d2b-537d-4061-ab52-6425076d7d6c-cilium-ipsec-secrets\") pod \"cilium-5zxmb\" (UID: \"ee574d2b-537d-4061-ab52-6425076d7d6c\") " pod="kube-system/cilium-5zxmb" Dec 13 14:16:59.873451 kubelet[3023]: I1213 14:16:59.873401 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ee574d2b-537d-4061-ab52-6425076d7d6c-hostproc\") pod \"cilium-5zxmb\" (UID: \"ee574d2b-537d-4061-ab52-6425076d7d6c\") " pod="kube-system/cilium-5zxmb" Dec 13 14:16:59.873756 kubelet[3023]: I1213 14:16:59.873719 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ee574d2b-537d-4061-ab52-6425076d7d6c-etc-cni-netd\") pod \"cilium-5zxmb\" (UID: \"ee574d2b-537d-4061-ab52-6425076d7d6c\") " pod="kube-system/cilium-5zxmb" Dec 13 14:16:59.874023 kubelet[3023]: I1213 14:16:59.873986 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ee574d2b-537d-4061-ab52-6425076d7d6c-cilium-config-path\") pod \"cilium-5zxmb\" (UID: \"ee574d2b-537d-4061-ab52-6425076d7d6c\") " pod="kube-system/cilium-5zxmb" Dec 13 14:17:00.036982 sshd[4774]: Accepted publickey for core from 139.178.89.65 port 52732 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:17:00.043550 sshd[4774]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:17:00.058939 env[1849]: time="2024-12-13T14:17:00.055636450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5zxmb,Uid:ee574d2b-537d-4061-ab52-6425076d7d6c,Namespace:kube-system,Attempt:0,}" Dec 13 14:17:00.075035 systemd-logind[1832]: New session 27 of user core. Dec 13 14:17:00.080370 systemd[1]: Started session-27.scope. Dec 13 14:17:00.114176 env[1849]: time="2024-12-13T14:17:00.110327329Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:17:00.114176 env[1849]: time="2024-12-13T14:17:00.110464777Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:17:00.114176 env[1849]: time="2024-12-13T14:17:00.110494213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:17:00.114176 env[1849]: time="2024-12-13T14:17:00.111194077Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/184bc0c782ae026fae4245bd5bd6b2cf5a3363b7663d4e0435753f020ff0cc62 pid=4787 runtime=io.containerd.runc.v2 Dec 13 14:17:00.201800 env[1849]: time="2024-12-13T14:17:00.201732576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5zxmb,Uid:ee574d2b-537d-4061-ab52-6425076d7d6c,Namespace:kube-system,Attempt:0,} returns sandbox id \"184bc0c782ae026fae4245bd5bd6b2cf5a3363b7663d4e0435753f020ff0cc62\"" Dec 13 14:17:00.211200 env[1849]: time="2024-12-13T14:17:00.208670119Z" level=info msg="CreateContainer within sandbox \"184bc0c782ae026fae4245bd5bd6b2cf5a3363b7663d4e0435753f020ff0cc62\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:17:00.236468 env[1849]: time="2024-12-13T14:17:00.236390636Z" level=info msg="CreateContainer within sandbox \"184bc0c782ae026fae4245bd5bd6b2cf5a3363b7663d4e0435753f020ff0cc62\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e309beea5148143866e33f8f35e752fd981ddd5d13067d62c0244998873c571b\"" Dec 13 14:17:00.238054 env[1849]: time="2024-12-13T14:17:00.237991222Z" level=info msg="StartContainer for \"e309beea5148143866e33f8f35e752fd981ddd5d13067d62c0244998873c571b\"" Dec 13 14:17:00.398055 env[1849]: time="2024-12-13T14:17:00.397977529Z" level=info msg="StartContainer for \"e309beea5148143866e33f8f35e752fd981ddd5d13067d62c0244998873c571b\" returns successfully" Dec 13 14:17:00.477541 sshd[4774]: pam_unix(sshd:session): session closed for user core Dec 13 14:17:00.483042 systemd[1]: sshd@26-172.31.27.214:22-139.178.89.65:52732.service: Deactivated successfully. Dec 13 14:17:00.484830 systemd[1]: session-27.scope: Deactivated successfully. Dec 13 14:17:00.488933 systemd-logind[1832]: Session 27 logged out. Waiting for processes to exit. Dec 13 14:17:00.497514 systemd-logind[1832]: Removed session 27. Dec 13 14:17:00.504899 systemd[1]: Started sshd@27-172.31.27.214:22-139.178.89.65:52736.service. 
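The env[1849] entries above trace containerd's side of pod startup in the standard CRI order: RunPodSandbox returns a sandbox ID (184bc0c7...), then the first init container, mount-cgroup, is created inside that sandbox and started. A compact sketch of that call order against a CRI v1 client, with the sandbox and container configs left to the caller:

    package criutil

    import (
    	"context"

    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    // startFirstContainer follows the CRI call order visible in the log:
    // RunPodSandbox -> CreateContainer -> StartContainer, showing how the
    // returned IDs thread through the three calls.
    func startFirstContainer(ctx context.Context, rt runtimeapi.RuntimeServiceClient,
    	sandboxCfg *runtimeapi.PodSandboxConfig, ctrCfg *runtimeapi.ContainerConfig) (string, string, error) {

    	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
    	if err != nil {
    		return "", "", err
    	}
    	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
    		PodSandboxId:  sb.PodSandboxId,
    		Config:        ctrCfg,
    		SandboxConfig: sandboxCfg,
    	})
    	if err != nil {
    		return sb.PodSandboxId, "", err
    	}
    	_, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId})
    	return sb.PodSandboxId, ctr.ContainerId, err
    }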
Dec 13 14:17:00.540745 env[1849]: time="2024-12-13T14:17:00.540642419Z" level=info msg="shim disconnected" id=e309beea5148143866e33f8f35e752fd981ddd5d13067d62c0244998873c571b Dec 13 14:17:00.541421 env[1849]: time="2024-12-13T14:17:00.541359192Z" level=warning msg="cleaning up after shim disconnected" id=e309beea5148143866e33f8f35e752fd981ddd5d13067d62c0244998873c571b namespace=k8s.io Dec 13 14:17:00.541619 env[1849]: time="2024-12-13T14:17:00.541590108Z" level=info msg="cleaning up dead shim" Dec 13 14:17:00.558034 env[1849]: time="2024-12-13T14:17:00.557967231Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:17:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4883 runtime=io.containerd.runc.v2\n" Dec 13 14:17:00.596383 env[1849]: time="2024-12-13T14:17:00.596271039Z" level=info msg="StopPodSandbox for \"184bc0c782ae026fae4245bd5bd6b2cf5a3363b7663d4e0435753f020ff0cc62\"" Dec 13 14:17:00.596595 env[1849]: time="2024-12-13T14:17:00.596429007Z" level=info msg="Container to stop \"e309beea5148143866e33f8f35e752fd981ddd5d13067d62c0244998873c571b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:17:00.665132 env[1849]: time="2024-12-13T14:17:00.664893378Z" level=info msg="shim disconnected" id=184bc0c782ae026fae4245bd5bd6b2cf5a3363b7663d4e0435753f020ff0cc62 Dec 13 14:17:00.665132 env[1849]: time="2024-12-13T14:17:00.664990986Z" level=warning msg="cleaning up after shim disconnected" id=184bc0c782ae026fae4245bd5bd6b2cf5a3363b7663d4e0435753f020ff0cc62 namespace=k8s.io Dec 13 14:17:00.665132 env[1849]: time="2024-12-13T14:17:00.665018394Z" level=info msg="cleaning up dead shim" Dec 13 14:17:00.684676 env[1849]: time="2024-12-13T14:17:00.684533472Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:17:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4919 runtime=io.containerd.runc.v2\n" Dec 13 14:17:00.685309 env[1849]: time="2024-12-13T14:17:00.685238640Z" level=info msg="TearDown network for sandbox \"184bc0c782ae026fae4245bd5bd6b2cf5a3363b7663d4e0435753f020ff0cc62\" successfully" Dec 13 14:17:00.685309 env[1849]: time="2024-12-13T14:17:00.685297404Z" level=info msg="StopPodSandbox for \"184bc0c782ae026fae4245bd5bd6b2cf5a3363b7663d4e0435753f020ff0cc62\" returns successfully" Dec 13 14:17:00.707122 sshd[4882]: Accepted publickey for core from 139.178.89.65 port 52736 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:17:00.711312 sshd[4882]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:17:00.731958 systemd[1]: Started session-28.scope. Dec 13 14:17:00.734189 systemd-logind[1832]: New session 28 of user core. 
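"shim disconnected" followed by "cleaning up after shim disconnected" and "cleaning up dead shim" is containerd's normal teardown once a container's process exits, and mount-cgroup is an init container, so an immediate exit is expected. Kubelet then stops the whole sandbox (the "must be in running or unknown state" line is informational during StopPodSandbox, since the container had already exited); the pod itself is being deleted, as the UnmountVolume entries that follow show, and is replaced by cilium-qbrdj further down. To correlate such exits with container IDs, a small scanner over the journal works; a sketch under the assumption that entries keep the msg="shim disconnected" id=<64-hex> shape seen here:

    package logscan

    import (
    	"bufio"
    	"os"
    	"regexp"
    )

    // shimExit matches containerd's "shim disconnected" lines as they
    // appear in this journal and captures the container/sandbox ID.
    var shimExit = regexp.MustCompile(`msg="shim disconnected" id=([0-9a-f]{64})`)

    // ExitedIDs scans a journal file and returns every ID that reported a
    // shim disconnect. Sketch only; real entries may wrap or quote
    // differently than this regexp assumes.
    func ExitedIDs(path string) ([]string, error) {
    	f, err := os.Open(path)
    	if err != nil {
    		return nil, err
    	}
    	defer f.Close()

    	var ids []string
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		if m := shimExit.FindStringSubmatch(sc.Text()); m != nil {
    			ids = append(ids, m[1])
    		}
    	}
    	return ids, sc.Err()
    }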
Dec 13 14:17:00.795138 kubelet[3023]: I1213 14:17:00.794756 3023 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ee574d2b-537d-4061-ab52-6425076d7d6c-host-proc-sys-net\") pod \"ee574d2b-537d-4061-ab52-6425076d7d6c\" (UID: \"ee574d2b-537d-4061-ab52-6425076d7d6c\") " Dec 13 14:17:00.795138 kubelet[3023]: I1213 14:17:00.794884 3023 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ee574d2b-537d-4061-ab52-6425076d7d6c-hubble-tls\") pod \"ee574d2b-537d-4061-ab52-6425076d7d6c\" (UID: \"ee574d2b-537d-4061-ab52-6425076d7d6c\") " Dec 13 14:17:00.795138 kubelet[3023]: I1213 14:17:00.794928 3023 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ee574d2b-537d-4061-ab52-6425076d7d6c-lib-modules\") pod \"ee574d2b-537d-4061-ab52-6425076d7d6c\" (UID: \"ee574d2b-537d-4061-ab52-6425076d7d6c\") " Dec 13 14:17:00.795138 kubelet[3023]: I1213 14:17:00.794992 3023 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ee574d2b-537d-4061-ab52-6425076d7d6c-cilium-run\") pod \"ee574d2b-537d-4061-ab52-6425076d7d6c\" (UID: \"ee574d2b-537d-4061-ab52-6425076d7d6c\") " Dec 13 14:17:00.795138 kubelet[3023]: I1213 14:17:00.795061 3023 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ee574d2b-537d-4061-ab52-6425076d7d6c-xtables-lock\") pod \"ee574d2b-537d-4061-ab52-6425076d7d6c\" (UID: \"ee574d2b-537d-4061-ab52-6425076d7d6c\") " Dec 13 14:17:00.796056 kubelet[3023]: I1213 14:17:00.795173 3023 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ee574d2b-537d-4061-ab52-6425076d7d6c-clustermesh-secrets\") pod \"ee574d2b-537d-4061-ab52-6425076d7d6c\" (UID: \"ee574d2b-537d-4061-ab52-6425076d7d6c\") " Dec 13 14:17:00.796056 kubelet[3023]: I1213 14:17:00.795248 3023 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ee574d2b-537d-4061-ab52-6425076d7d6c-cilium-ipsec-secrets\") pod \"ee574d2b-537d-4061-ab52-6425076d7d6c\" (UID: \"ee574d2b-537d-4061-ab52-6425076d7d6c\") " Dec 13 14:17:00.796056 kubelet[3023]: I1213 14:17:00.795321 3023 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ee574d2b-537d-4061-ab52-6425076d7d6c-hostproc\") pod \"ee574d2b-537d-4061-ab52-6425076d7d6c\" (UID: \"ee574d2b-537d-4061-ab52-6425076d7d6c\") " Dec 13 14:17:00.796056 kubelet[3023]: I1213 14:17:00.795465 3023 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ee574d2b-537d-4061-ab52-6425076d7d6c-cilium-config-path\") pod \"ee574d2b-537d-4061-ab52-6425076d7d6c\" (UID: \"ee574d2b-537d-4061-ab52-6425076d7d6c\") " Dec 13 14:17:00.796056 kubelet[3023]: I1213 14:17:00.795538 3023 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ee574d2b-537d-4061-ab52-6425076d7d6c-cni-path\") pod \"ee574d2b-537d-4061-ab52-6425076d7d6c\" (UID: \"ee574d2b-537d-4061-ab52-6425076d7d6c\") " Dec 13 14:17:00.796056 kubelet[3023]: I1213 14:17:00.795586 3023 
reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ee574d2b-537d-4061-ab52-6425076d7d6c-bpf-maps\") pod \"ee574d2b-537d-4061-ab52-6425076d7d6c\" (UID: \"ee574d2b-537d-4061-ab52-6425076d7d6c\") " Dec 13 14:17:00.800662 kubelet[3023]: I1213 14:17:00.795657 3023 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ee574d2b-537d-4061-ab52-6425076d7d6c-cilium-cgroup\") pod \"ee574d2b-537d-4061-ab52-6425076d7d6c\" (UID: \"ee574d2b-537d-4061-ab52-6425076d7d6c\") " Dec 13 14:17:00.800662 kubelet[3023]: I1213 14:17:00.795735 3023 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpj7f\" (UniqueName: \"kubernetes.io/projected/ee574d2b-537d-4061-ab52-6425076d7d6c-kube-api-access-kpj7f\") pod \"ee574d2b-537d-4061-ab52-6425076d7d6c\" (UID: \"ee574d2b-537d-4061-ab52-6425076d7d6c\") " Dec 13 14:17:00.800662 kubelet[3023]: I1213 14:17:00.795806 3023 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ee574d2b-537d-4061-ab52-6425076d7d6c-etc-cni-netd\") pod \"ee574d2b-537d-4061-ab52-6425076d7d6c\" (UID: \"ee574d2b-537d-4061-ab52-6425076d7d6c\") " Dec 13 14:17:00.800662 kubelet[3023]: I1213 14:17:00.795856 3023 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ee574d2b-537d-4061-ab52-6425076d7d6c-host-proc-sys-kernel\") pod \"ee574d2b-537d-4061-ab52-6425076d7d6c\" (UID: \"ee574d2b-537d-4061-ab52-6425076d7d6c\") " Dec 13 14:17:00.800662 kubelet[3023]: I1213 14:17:00.796018 3023 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee574d2b-537d-4061-ab52-6425076d7d6c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ee574d2b-537d-4061-ab52-6425076d7d6c" (UID: "ee574d2b-537d-4061-ab52-6425076d7d6c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:17:00.800982 kubelet[3023]: I1213 14:17:00.796131 3023 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee574d2b-537d-4061-ab52-6425076d7d6c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ee574d2b-537d-4061-ab52-6425076d7d6c" (UID: "ee574d2b-537d-4061-ab52-6425076d7d6c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:17:00.800982 kubelet[3023]: I1213 14:17:00.796155 3023 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee574d2b-537d-4061-ab52-6425076d7d6c-hostproc" (OuterVolumeSpecName: "hostproc") pod "ee574d2b-537d-4061-ab52-6425076d7d6c" (UID: "ee574d2b-537d-4061-ab52-6425076d7d6c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:17:00.800982 kubelet[3023]: I1213 14:17:00.797231 3023 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee574d2b-537d-4061-ab52-6425076d7d6c-cni-path" (OuterVolumeSpecName: "cni-path") pod "ee574d2b-537d-4061-ab52-6425076d7d6c" (UID: "ee574d2b-537d-4061-ab52-6425076d7d6c"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:17:00.800982 kubelet[3023]: I1213 14:17:00.797328 3023 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee574d2b-537d-4061-ab52-6425076d7d6c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ee574d2b-537d-4061-ab52-6425076d7d6c" (UID: "ee574d2b-537d-4061-ab52-6425076d7d6c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:17:00.800982 kubelet[3023]: I1213 14:17:00.797383 3023 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee574d2b-537d-4061-ab52-6425076d7d6c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ee574d2b-537d-4061-ab52-6425076d7d6c" (UID: "ee574d2b-537d-4061-ab52-6425076d7d6c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:17:00.801407 kubelet[3023]: I1213 14:17:00.797153 3023 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee574d2b-537d-4061-ab52-6425076d7d6c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ee574d2b-537d-4061-ab52-6425076d7d6c" (UID: "ee574d2b-537d-4061-ab52-6425076d7d6c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:17:00.801407 kubelet[3023]: I1213 14:17:00.798122 3023 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee574d2b-537d-4061-ab52-6425076d7d6c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ee574d2b-537d-4061-ab52-6425076d7d6c" (UID: "ee574d2b-537d-4061-ab52-6425076d7d6c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:17:00.801407 kubelet[3023]: I1213 14:17:00.798204 3023 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee574d2b-537d-4061-ab52-6425076d7d6c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ee574d2b-537d-4061-ab52-6425076d7d6c" (UID: "ee574d2b-537d-4061-ab52-6425076d7d6c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:17:00.801407 kubelet[3023]: I1213 14:17:00.799575 3023 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee574d2b-537d-4061-ab52-6425076d7d6c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ee574d2b-537d-4061-ab52-6425076d7d6c" (UID: "ee574d2b-537d-4061-ab52-6425076d7d6c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:17:00.806798 kubelet[3023]: I1213 14:17:00.806738 3023 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee574d2b-537d-4061-ab52-6425076d7d6c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ee574d2b-537d-4061-ab52-6425076d7d6c" (UID: "ee574d2b-537d-4061-ab52-6425076d7d6c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:17:00.809562 kubelet[3023]: I1213 14:17:00.809487 3023 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee574d2b-537d-4061-ab52-6425076d7d6c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ee574d2b-537d-4061-ab52-6425076d7d6c" (UID: "ee574d2b-537d-4061-ab52-6425076d7d6c"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:17:00.810531 kubelet[3023]: I1213 14:17:00.810472 3023 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee574d2b-537d-4061-ab52-6425076d7d6c-kube-api-access-kpj7f" (OuterVolumeSpecName: "kube-api-access-kpj7f") pod "ee574d2b-537d-4061-ab52-6425076d7d6c" (UID: "ee574d2b-537d-4061-ab52-6425076d7d6c"). InnerVolumeSpecName "kube-api-access-kpj7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:17:00.813061 kubelet[3023]: I1213 14:17:00.812995 3023 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee574d2b-537d-4061-ab52-6425076d7d6c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ee574d2b-537d-4061-ab52-6425076d7d6c" (UID: "ee574d2b-537d-4061-ab52-6425076d7d6c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:17:00.817017 kubelet[3023]: I1213 14:17:00.816953 3023 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee574d2b-537d-4061-ab52-6425076d7d6c-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "ee574d2b-537d-4061-ab52-6425076d7d6c" (UID: "ee574d2b-537d-4061-ab52-6425076d7d6c"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:17:00.896954 kubelet[3023]: I1213 14:17:00.896876 3023 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ee574d2b-537d-4061-ab52-6425076d7d6c-clustermesh-secrets\") on node \"ip-172-31-27-214\" DevicePath \"\"" Dec 13 14:17:00.896954 kubelet[3023]: I1213 14:17:00.896948 3023 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ee574d2b-537d-4061-ab52-6425076d7d6c-cilium-ipsec-secrets\") on node \"ip-172-31-27-214\" DevicePath \"\"" Dec 13 14:17:00.897299 kubelet[3023]: I1213 14:17:00.896982 3023 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ee574d2b-537d-4061-ab52-6425076d7d6c-hostproc\") on node \"ip-172-31-27-214\" DevicePath \"\"" Dec 13 14:17:00.897299 kubelet[3023]: I1213 14:17:00.897009 3023 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ee574d2b-537d-4061-ab52-6425076d7d6c-cilium-config-path\") on node \"ip-172-31-27-214\" DevicePath \"\"" Dec 13 14:17:00.897299 kubelet[3023]: I1213 14:17:00.897034 3023 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ee574d2b-537d-4061-ab52-6425076d7d6c-bpf-maps\") on node \"ip-172-31-27-214\" DevicePath \"\"" Dec 13 14:17:00.897299 kubelet[3023]: I1213 14:17:00.897060 3023 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ee574d2b-537d-4061-ab52-6425076d7d6c-cni-path\") on node \"ip-172-31-27-214\" DevicePath \"\"" Dec 13 14:17:00.897299 kubelet[3023]: I1213 14:17:00.897106 3023 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ee574d2b-537d-4061-ab52-6425076d7d6c-cilium-cgroup\") on node \"ip-172-31-27-214\" DevicePath \"\"" Dec 13 14:17:00.897299 kubelet[3023]: I1213 14:17:00.897149 3023 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-kpj7f\" (UniqueName: 
\"kubernetes.io/projected/ee574d2b-537d-4061-ab52-6425076d7d6c-kube-api-access-kpj7f\") on node \"ip-172-31-27-214\" DevicePath \"\"" Dec 13 14:17:00.897299 kubelet[3023]: I1213 14:17:00.897177 3023 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ee574d2b-537d-4061-ab52-6425076d7d6c-etc-cni-netd\") on node \"ip-172-31-27-214\" DevicePath \"\"" Dec 13 14:17:00.897299 kubelet[3023]: I1213 14:17:00.897205 3023 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ee574d2b-537d-4061-ab52-6425076d7d6c-host-proc-sys-kernel\") on node \"ip-172-31-27-214\" DevicePath \"\"" Dec 13 14:17:00.897800 kubelet[3023]: I1213 14:17:00.897230 3023 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ee574d2b-537d-4061-ab52-6425076d7d6c-host-proc-sys-net\") on node \"ip-172-31-27-214\" DevicePath \"\"" Dec 13 14:17:00.897800 kubelet[3023]: I1213 14:17:00.897256 3023 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ee574d2b-537d-4061-ab52-6425076d7d6c-hubble-tls\") on node \"ip-172-31-27-214\" DevicePath \"\"" Dec 13 14:17:00.897800 kubelet[3023]: I1213 14:17:00.897280 3023 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ee574d2b-537d-4061-ab52-6425076d7d6c-lib-modules\") on node \"ip-172-31-27-214\" DevicePath \"\"" Dec 13 14:17:00.897800 kubelet[3023]: I1213 14:17:00.897305 3023 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ee574d2b-537d-4061-ab52-6425076d7d6c-cilium-run\") on node \"ip-172-31-27-214\" DevicePath \"\"" Dec 13 14:17:00.897800 kubelet[3023]: I1213 14:17:00.897333 3023 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ee574d2b-537d-4061-ab52-6425076d7d6c-xtables-lock\") on node \"ip-172-31-27-214\" DevicePath \"\"" Dec 13 14:17:00.985351 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-184bc0c782ae026fae4245bd5bd6b2cf5a3363b7663d4e0435753f020ff0cc62-shm.mount: Deactivated successfully. Dec 13 14:17:00.986435 systemd[1]: var-lib-kubelet-pods-ee574d2b\x2d537d\x2d4061\x2dab52\x2d6425076d7d6c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkpj7f.mount: Deactivated successfully. Dec 13 14:17:00.987044 systemd[1]: var-lib-kubelet-pods-ee574d2b\x2d537d\x2d4061\x2dab52\x2d6425076d7d6c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 14:17:00.987665 systemd[1]: var-lib-kubelet-pods-ee574d2b\x2d537d\x2d4061\x2dab52\x2d6425076d7d6c-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Dec 13 14:17:00.988301 systemd[1]: var-lib-kubelet-pods-ee574d2b\x2d537d\x2d4061\x2dab52\x2d6425076d7d6c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Dec 13 14:17:01.590556 kubelet[3023]: I1213 14:17:01.590515 3023 scope.go:117] "RemoveContainer" containerID="e309beea5148143866e33f8f35e752fd981ddd5d13067d62c0244998873c571b" Dec 13 14:17:01.594609 env[1849]: time="2024-12-13T14:17:01.594027500Z" level=info msg="RemoveContainer for \"e309beea5148143866e33f8f35e752fd981ddd5d13067d62c0244998873c571b\"" Dec 13 14:17:01.602907 env[1849]: time="2024-12-13T14:17:01.602834500Z" level=info msg="RemoveContainer for \"e309beea5148143866e33f8f35e752fd981ddd5d13067d62c0244998873c571b\" returns successfully" Dec 13 14:17:01.670300 kubelet[3023]: I1213 14:17:01.670243 3023 topology_manager.go:215] "Topology Admit Handler" podUID="efb113e0-e967-485d-bee9-e56d1b4c5f88" podNamespace="kube-system" podName="cilium-qbrdj" Dec 13 14:17:01.670511 kubelet[3023]: E1213 14:17:01.670337 3023 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ee574d2b-537d-4061-ab52-6425076d7d6c" containerName="mount-cgroup" Dec 13 14:17:01.670511 kubelet[3023]: I1213 14:17:01.670394 3023 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee574d2b-537d-4061-ab52-6425076d7d6c" containerName="mount-cgroup" Dec 13 14:17:01.804569 kubelet[3023]: I1213 14:17:01.804505 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/efb113e0-e967-485d-bee9-e56d1b4c5f88-hostproc\") pod \"cilium-qbrdj\" (UID: \"efb113e0-e967-485d-bee9-e56d1b4c5f88\") " pod="kube-system/cilium-qbrdj" Dec 13 14:17:01.805528 kubelet[3023]: I1213 14:17:01.805484 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/efb113e0-e967-485d-bee9-e56d1b4c5f88-etc-cni-netd\") pod \"cilium-qbrdj\" (UID: \"efb113e0-e967-485d-bee9-e56d1b4c5f88\") " pod="kube-system/cilium-qbrdj" Dec 13 14:17:01.805770 kubelet[3023]: I1213 14:17:01.805740 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/efb113e0-e967-485d-bee9-e56d1b4c5f88-clustermesh-secrets\") pod \"cilium-qbrdj\" (UID: \"efb113e0-e967-485d-bee9-e56d1b4c5f88\") " pod="kube-system/cilium-qbrdj" Dec 13 14:17:01.805988 kubelet[3023]: I1213 14:17:01.805961 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/efb113e0-e967-485d-bee9-e56d1b4c5f88-host-proc-sys-net\") pod \"cilium-qbrdj\" (UID: \"efb113e0-e967-485d-bee9-e56d1b4c5f88\") " pod="kube-system/cilium-qbrdj" Dec 13 14:17:01.806233 kubelet[3023]: I1213 14:17:01.806203 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/efb113e0-e967-485d-bee9-e56d1b4c5f88-bpf-maps\") pod \"cilium-qbrdj\" (UID: \"efb113e0-e967-485d-bee9-e56d1b4c5f88\") " pod="kube-system/cilium-qbrdj" Dec 13 14:17:01.806427 kubelet[3023]: I1213 14:17:01.806400 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/efb113e0-e967-485d-bee9-e56d1b4c5f88-lib-modules\") pod \"cilium-qbrdj\" (UID: \"efb113e0-e967-485d-bee9-e56d1b4c5f88\") " pod="kube-system/cilium-qbrdj" Dec 13 14:17:01.806674 kubelet[3023]: I1213 14:17:01.806644 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-9872t\" (UniqueName: \"kubernetes.io/projected/efb113e0-e967-485d-bee9-e56d1b4c5f88-kube-api-access-9872t\") pod \"cilium-qbrdj\" (UID: \"efb113e0-e967-485d-bee9-e56d1b4c5f88\") " pod="kube-system/cilium-qbrdj" Dec 13 14:17:01.806888 kubelet[3023]: I1213 14:17:01.806859 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/efb113e0-e967-485d-bee9-e56d1b4c5f88-xtables-lock\") pod \"cilium-qbrdj\" (UID: \"efb113e0-e967-485d-bee9-e56d1b4c5f88\") " pod="kube-system/cilium-qbrdj" Dec 13 14:17:01.807187 kubelet[3023]: I1213 14:17:01.807152 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/efb113e0-e967-485d-bee9-e56d1b4c5f88-cilium-ipsec-secrets\") pod \"cilium-qbrdj\" (UID: \"efb113e0-e967-485d-bee9-e56d1b4c5f88\") " pod="kube-system/cilium-qbrdj" Dec 13 14:17:01.807463 kubelet[3023]: I1213 14:17:01.807432 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/efb113e0-e967-485d-bee9-e56d1b4c5f88-cilium-cgroup\") pod \"cilium-qbrdj\" (UID: \"efb113e0-e967-485d-bee9-e56d1b4c5f88\") " pod="kube-system/cilium-qbrdj" Dec 13 14:17:01.807825 kubelet[3023]: I1213 14:17:01.807784 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/efb113e0-e967-485d-bee9-e56d1b4c5f88-cni-path\") pod \"cilium-qbrdj\" (UID: \"efb113e0-e967-485d-bee9-e56d1b4c5f88\") " pod="kube-system/cilium-qbrdj" Dec 13 14:17:01.808107 kubelet[3023]: I1213 14:17:01.808053 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/efb113e0-e967-485d-bee9-e56d1b4c5f88-cilium-config-path\") pod \"cilium-qbrdj\" (UID: \"efb113e0-e967-485d-bee9-e56d1b4c5f88\") " pod="kube-system/cilium-qbrdj" Dec 13 14:17:01.808318 kubelet[3023]: I1213 14:17:01.808292 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/efb113e0-e967-485d-bee9-e56d1b4c5f88-hubble-tls\") pod \"cilium-qbrdj\" (UID: \"efb113e0-e967-485d-bee9-e56d1b4c5f88\") " pod="kube-system/cilium-qbrdj" Dec 13 14:17:01.808546 kubelet[3023]: I1213 14:17:01.808519 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/efb113e0-e967-485d-bee9-e56d1b4c5f88-cilium-run\") pod \"cilium-qbrdj\" (UID: \"efb113e0-e967-485d-bee9-e56d1b4c5f88\") " pod="kube-system/cilium-qbrdj" Dec 13 14:17:01.808724 kubelet[3023]: I1213 14:17:01.808702 3023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/efb113e0-e967-485d-bee9-e56d1b4c5f88-host-proc-sys-kernel\") pod \"cilium-qbrdj\" (UID: \"efb113e0-e967-485d-bee9-e56d1b4c5f88\") " pod="kube-system/cilium-qbrdj" Dec 13 14:17:02.080457 kubelet[3023]: I1213 14:17:02.080381 3023 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ee574d2b-537d-4061-ab52-6425076d7d6c" path="/var/lib/kubelet/pods/ee574d2b-537d-4061-ab52-6425076d7d6c/volumes" Dec 13 14:17:02.288226 env[1849]: time="2024-12-13T14:17:02.288134814Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qbrdj,Uid:efb113e0-e967-485d-bee9-e56d1b4c5f88,Namespace:kube-system,Attempt:0,}" Dec 13 14:17:02.324039 env[1849]: time="2024-12-13T14:17:02.323916986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:17:02.324380 env[1849]: time="2024-12-13T14:17:02.324313383Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:17:02.324586 env[1849]: time="2024-12-13T14:17:02.324529743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:17:02.325264 env[1849]: time="2024-12-13T14:17:02.325193547Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a441bd58033929df801134826fa2962472b055543d8242fbfee099921b726346 pid=4956 runtime=io.containerd.runc.v2 Dec 13 14:17:02.413192 env[1849]: time="2024-12-13T14:17:02.412699198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qbrdj,Uid:efb113e0-e967-485d-bee9-e56d1b4c5f88,Namespace:kube-system,Attempt:0,} returns sandbox id \"a441bd58033929df801134826fa2962472b055543d8242fbfee099921b726346\"" Dec 13 14:17:02.420362 env[1849]: time="2024-12-13T14:17:02.420293705Z" level=info msg="CreateContainer within sandbox \"a441bd58033929df801134826fa2962472b055543d8242fbfee099921b726346\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:17:02.454235 env[1849]: time="2024-12-13T14:17:02.454162079Z" level=info msg="CreateContainer within sandbox \"a441bd58033929df801134826fa2962472b055543d8242fbfee099921b726346\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5cdc052992028b1b3fa20871183693083dc63417f5f4aca1eb43b9d131233441\"" Dec 13 14:17:02.455594 env[1849]: time="2024-12-13T14:17:02.455506824Z" level=info msg="StartContainer for \"5cdc052992028b1b3fa20871183693083dc63417f5f4aca1eb43b9d131233441\"" Dec 13 14:17:02.561126 env[1849]: time="2024-12-13T14:17:02.561006527Z" level=info msg="StartContainer for \"5cdc052992028b1b3fa20871183693083dc63417f5f4aca1eb43b9d131233441\" returns successfully" Dec 13 14:17:02.659035 env[1849]: time="2024-12-13T14:17:02.658967858Z" level=info msg="shim disconnected" id=5cdc052992028b1b3fa20871183693083dc63417f5f4aca1eb43b9d131233441 Dec 13 14:17:02.660148 env[1849]: time="2024-12-13T14:17:02.660100227Z" level=warning msg="cleaning up after shim disconnected" id=5cdc052992028b1b3fa20871183693083dc63417f5f4aca1eb43b9d131233441 namespace=k8s.io Dec 13 14:17:02.660323 env[1849]: time="2024-12-13T14:17:02.660292132Z" level=info msg="cleaning up dead shim" Dec 13 14:17:02.676628 env[1849]: time="2024-12-13T14:17:02.676108830Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:17:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5041 runtime=io.containerd.runc.v2\n" Dec 13 14:17:02.986143 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2851470565.mount: Deactivated successfully. Dec 13 14:17:03.619827 env[1849]: time="2024-12-13T14:17:03.619251265Z" level=info msg="CreateContainer within sandbox \"a441bd58033929df801134826fa2962472b055543d8242fbfee099921b726346\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:17:03.651315 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4243885568.mount: Deactivated successfully. 
Dec 13 14:17:03.661922 env[1849]: time="2024-12-13T14:17:03.661805174Z" level=info msg="CreateContainer within sandbox \"a441bd58033929df801134826fa2962472b055543d8242fbfee099921b726346\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8b5005a72be19b6843a6a3eb1598c2a7a987ad0b1998a1f1f0ddd5d7e2e21b74\"" Dec 13 14:17:03.664597 env[1849]: time="2024-12-13T14:17:03.662941695Z" level=info msg="StartContainer for \"8b5005a72be19b6843a6a3eb1598c2a7a987ad0b1998a1f1f0ddd5d7e2e21b74\"" Dec 13 14:17:03.795100 env[1849]: time="2024-12-13T14:17:03.795002484Z" level=info msg="StartContainer for \"8b5005a72be19b6843a6a3eb1598c2a7a987ad0b1998a1f1f0ddd5d7e2e21b74\" returns successfully" Dec 13 14:17:03.844950 env[1849]: time="2024-12-13T14:17:03.844877348Z" level=info msg="shim disconnected" id=8b5005a72be19b6843a6a3eb1598c2a7a987ad0b1998a1f1f0ddd5d7e2e21b74 Dec 13 14:17:03.845516 env[1849]: time="2024-12-13T14:17:03.845458305Z" level=warning msg="cleaning up after shim disconnected" id=8b5005a72be19b6843a6a3eb1598c2a7a987ad0b1998a1f1f0ddd5d7e2e21b74 namespace=k8s.io Dec 13 14:17:03.845683 env[1849]: time="2024-12-13T14:17:03.845654145Z" level=info msg="cleaning up dead shim" Dec 13 14:17:03.860651 env[1849]: time="2024-12-13T14:17:03.860591878Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:17:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5105 runtime=io.containerd.runc.v2\n" Dec 13 14:17:03.986169 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b5005a72be19b6843a6a3eb1598c2a7a987ad0b1998a1f1f0ddd5d7e2e21b74-rootfs.mount: Deactivated successfully. Dec 13 14:17:04.333720 kubelet[3023]: E1213 14:17:04.333661 3023 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 14:17:04.645585 env[1849]: time="2024-12-13T14:17:04.645128639Z" level=info msg="CreateContainer within sandbox \"a441bd58033929df801134826fa2962472b055543d8242fbfee099921b726346\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 14:17:04.681865 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3008616969.mount: Deactivated successfully. 
Dec 13 14:17:04.692351 env[1849]: time="2024-12-13T14:17:04.692265244Z" level=info msg="CreateContainer within sandbox \"a441bd58033929df801134826fa2962472b055543d8242fbfee099921b726346\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ef47f74cbc17ea76f47b4ad2881c578e17f3f62747256210abda58d25d837bd3\"" Dec 13 14:17:04.693731 env[1849]: time="2024-12-13T14:17:04.693678089Z" level=info msg="StartContainer for \"ef47f74cbc17ea76f47b4ad2881c578e17f3f62747256210abda58d25d837bd3\"" Dec 13 14:17:04.830525 env[1849]: time="2024-12-13T14:17:04.830457725Z" level=info msg="StartContainer for \"ef47f74cbc17ea76f47b4ad2881c578e17f3f62747256210abda58d25d837bd3\" returns successfully" Dec 13 14:17:04.869927 env[1849]: time="2024-12-13T14:17:04.869855152Z" level=info msg="shim disconnected" id=ef47f74cbc17ea76f47b4ad2881c578e17f3f62747256210abda58d25d837bd3 Dec 13 14:17:04.870256 env[1849]: time="2024-12-13T14:17:04.869929552Z" level=warning msg="cleaning up after shim disconnected" id=ef47f74cbc17ea76f47b4ad2881c578e17f3f62747256210abda58d25d837bd3 namespace=k8s.io Dec 13 14:17:04.870256 env[1849]: time="2024-12-13T14:17:04.870158164Z" level=info msg="cleaning up dead shim" Dec 13 14:17:04.885880 env[1849]: time="2024-12-13T14:17:04.885815634Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:17:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5162 runtime=io.containerd.runc.v2\n" Dec 13 14:17:04.986293 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef47f74cbc17ea76f47b4ad2881c578e17f3f62747256210abda58d25d837bd3-rootfs.mount: Deactivated successfully. Dec 13 14:17:05.633837 env[1849]: time="2024-12-13T14:17:05.633756927Z" level=info msg="CreateContainer within sandbox \"a441bd58033929df801134826fa2962472b055543d8242fbfee099921b726346\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 14:17:05.669241 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1679692218.mount: Deactivated successfully. 
Dec 13 14:17:05.710523 env[1849]: time="2024-12-13T14:17:05.710437197Z" level=info msg="CreateContainer within sandbox \"a441bd58033929df801134826fa2962472b055543d8242fbfee099921b726346\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"03310435d138e2cbae675137ca23911582e0854a09e2390690e83a287d99eafd\"" Dec 13 14:17:05.712269 env[1849]: time="2024-12-13T14:17:05.712133806Z" level=info msg="StartContainer for \"03310435d138e2cbae675137ca23911582e0854a09e2390690e83a287d99eafd\"" Dec 13 14:17:05.835033 env[1849]: time="2024-12-13T14:17:05.834944277Z" level=info msg="StartContainer for \"03310435d138e2cbae675137ca23911582e0854a09e2390690e83a287d99eafd\" returns successfully" Dec 13 14:17:05.875966 env[1849]: time="2024-12-13T14:17:05.875904260Z" level=info msg="shim disconnected" id=03310435d138e2cbae675137ca23911582e0854a09e2390690e83a287d99eafd Dec 13 14:17:05.876523 env[1849]: time="2024-12-13T14:17:05.876473469Z" level=warning msg="cleaning up after shim disconnected" id=03310435d138e2cbae675137ca23911582e0854a09e2390690e83a287d99eafd namespace=k8s.io Dec 13 14:17:05.876705 env[1849]: time="2024-12-13T14:17:05.876676725Z" level=info msg="cleaning up dead shim" Dec 13 14:17:05.892115 env[1849]: time="2024-12-13T14:17:05.891427750Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:17:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5216 runtime=io.containerd.runc.v2\n" Dec 13 14:17:05.986300 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-03310435d138e2cbae675137ca23911582e0854a09e2390690e83a287d99eafd-rootfs.mount: Deactivated successfully. Dec 13 14:17:06.595170 kubelet[3023]: I1213 14:17:06.594805 3023 setters.go:568] "Node became not ready" node="ip-172-31-27-214" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T14:17:06Z","lastTransitionTime":"2024-12-13T14:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 14:17:06.651768 env[1849]: time="2024-12-13T14:17:06.651669826Z" level=info msg="CreateContainer within sandbox \"a441bd58033929df801134826fa2962472b055543d8242fbfee099921b726346\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 14:17:06.707450 env[1849]: time="2024-12-13T14:17:06.707352874Z" level=info msg="CreateContainer within sandbox \"a441bd58033929df801134826fa2962472b055543d8242fbfee099921b726346\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"239bd8a6505a50de846e0717eb46d6751d82efb735dfa12e19d5186d2da61694\"" Dec 13 14:17:06.708826 env[1849]: time="2024-12-13T14:17:06.708576011Z" level=info msg="StartContainer for \"239bd8a6505a50de846e0717eb46d6751d82efb735dfa12e19d5186d2da61694\"" Dec 13 14:17:06.861417 env[1849]: time="2024-12-13T14:17:06.861260794Z" level=info msg="StartContainer for \"239bd8a6505a50de846e0717eb46d6751d82efb735dfa12e19d5186d2da61694\" returns successfully" Dec 13 14:17:07.686119 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Dec 13 14:17:09.338053 systemd[1]: run-containerd-runc-k8s.io-239bd8a6505a50de846e0717eb46d6751d82efb735dfa12e19d5186d2da61694-runc.Y2g9hB.mount: Deactivated successfully. Dec 13 14:17:11.620829 systemd[1]: run-containerd-runc-k8s.io-239bd8a6505a50de846e0717eb46d6751d82efb735dfa12e19d5186d2da61694-runc.xLxpJg.mount: Deactivated successfully. 
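With clean-cilium-state done, the long-running cilium-agent container finally starts and stays up (no shim disconnect follows it). The kernel's "alg: No test for seqiv(rfc4106(gcm(aes)))" line is a harmless crypto self-test notice, plausibly triggered when the agent sets up IPsec using the cilium-ipsec-secrets volume mounted above. The "Node became not ready" condition recorded just before should flip back once the agent initializes the CNI; a client-go sketch for reading that condition:

    package nodewatch

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // nodeReady reads the Ready condition that kubelet was flipping in the
    // log ("Node became not ready", reason KubeletNotReady) and returns
    // true once the condition goes back to True.
    func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
    	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }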
Dec 13 14:17:12.195191 systemd-networkd[1512]: lxc_health: Link UP Dec 13 14:17:12.205509 (udev-worker)[5785]: Network interface NamePolicy= disabled on kernel command line. Dec 13 14:17:12.217044 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 14:17:12.215761 systemd-networkd[1512]: lxc_health: Gained carrier Dec 13 14:17:12.323936 kubelet[3023]: I1213 14:17:12.323870 3023 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-qbrdj" podStartSLOduration=11.323807033 podStartE2EDuration="11.323807033s" podCreationTimestamp="2024-12-13 14:17:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:17:07.683855732 +0000 UTC m=+133.934560596" watchObservedRunningTime="2024-12-13 14:17:12.323807033 +0000 UTC m=+138.574511873" Dec 13 14:17:13.784341 systemd-networkd[1512]: lxc_health: Gained IPv6LL Dec 13 14:17:16.346271 systemd[1]: run-containerd-runc-k8s.io-239bd8a6505a50de846e0717eb46d6751d82efb735dfa12e19d5186d2da61694-runc.qxILR2.mount: Deactivated successfully. Dec 13 14:17:18.700004 systemd[1]: run-containerd-runc-k8s.io-239bd8a6505a50de846e0717eb46d6751d82efb735dfa12e19d5186d2da61694-runc.ScyLYy.mount: Deactivated successfully. Dec 13 14:17:18.842686 sshd[4882]: pam_unix(sshd:session): session closed for user core Dec 13 14:17:18.849814 systemd[1]: sshd@27-172.31.27.214:22-139.178.89.65:52736.service: Deactivated successfully. Dec 13 14:17:18.851579 systemd[1]: session-28.scope: Deactivated successfully. Dec 13 14:17:18.853643 systemd-logind[1832]: Session 28 logged out. Waiting for processes to exit. Dec 13 14:17:18.857379 systemd-logind[1832]: Removed session 28. Dec 13 14:17:32.311561 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-855152d07892a305e94a98566c47a0529caed0e9495f243bee320692093cb997-rootfs.mount: Deactivated successfully. Dec 13 14:17:32.354368 env[1849]: time="2024-12-13T14:17:32.354303565Z" level=info msg="shim disconnected" id=855152d07892a305e94a98566c47a0529caed0e9495f243bee320692093cb997 Dec 13 14:17:32.356200 env[1849]: time="2024-12-13T14:17:32.356147366Z" level=warning msg="cleaning up after shim disconnected" id=855152d07892a305e94a98566c47a0529caed0e9495f243bee320692093cb997 namespace=k8s.io Dec 13 14:17:32.356389 env[1849]: time="2024-12-13T14:17:32.356360174Z" level=info msg="cleaning up dead shim" Dec 13 14:17:32.370466 env[1849]: time="2024-12-13T14:17:32.370407744Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:17:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5898 runtime=io.containerd.runc.v2\n" Dec 13 14:17:32.739282 kubelet[3023]: I1213 14:17:32.739234 3023 scope.go:117] "RemoveContainer" containerID="855152d07892a305e94a98566c47a0529caed0e9495f243bee320692093cb997" Dec 13 14:17:32.744138 env[1849]: time="2024-12-13T14:17:32.744035995Z" level=info msg="CreateContainer within sandbox \"5db3b65b73b51b39008232a89f3c3cb3ad74e9800a78be46d277f702e62f0704\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Dec 13 14:17:32.773118 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2073261192.mount: Deactivated successfully. 
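Attempt:1 in the CreateContainer metadata above marks a restart: the kube-controller-manager container (855152d0...) exited, its shim was cleaned up, and kubelet is recreating it inside the same static-pod sandbox. Repeated exits would be throttled by kubelet's crash backoff, sketched below with the documented CrashLoopBackOff constants (10s doubling to a 5m cap); the real kubelet also adds jitter and resets the counter after a stable run:

    package backoff

    import "time"

    // crashBackoff approximates kubelet's container restart backoff:
    // start at 10s, double per consecutive crash, cap at 5 minutes.
    func crashBackoff(restarts int) time.Duration {
    	d := 10 * time.Second
    	for i := 0; i < restarts; i++ {
    		d *= 2
    		if d >= 5*time.Minute {
    			return 5 * time.Minute
    		}
    	}
    	return d
    }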
Dec 13 14:17:32.787340 env[1849]: time="2024-12-13T14:17:32.787277293Z" level=info msg="CreateContainer within sandbox \"5db3b65b73b51b39008232a89f3c3cb3ad74e9800a78be46d277f702e62f0704\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"6003a14fd7bc3b1943476823cbe380ac6109b0eb710f25be0bc9ebbd1520dd18\"" Dec 13 14:17:32.788318 env[1849]: time="2024-12-13T14:17:32.788228366Z" level=info msg="StartContainer for \"6003a14fd7bc3b1943476823cbe380ac6109b0eb710f25be0bc9ebbd1520dd18\"" Dec 13 14:17:32.915216 env[1849]: time="2024-12-13T14:17:32.915127325Z" level=info msg="StartContainer for \"6003a14fd7bc3b1943476823cbe380ac6109b0eb710f25be0bc9ebbd1520dd18\" returns successfully" Dec 13 14:17:36.157404 kubelet[3023]: E1213 14:17:36.157352 3023 controller.go:195] "Failed to update lease" err="Put \"https://172.31.27.214:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-214?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 13 14:17:37.913553 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f1872f4034d3540370d1f356c5185ab7ee2726cea610e8ff326e29d8448813a9-rootfs.mount: Deactivated successfully. Dec 13 14:17:37.930373 env[1849]: time="2024-12-13T14:17:37.930311131Z" level=info msg="shim disconnected" id=f1872f4034d3540370d1f356c5185ab7ee2726cea610e8ff326e29d8448813a9 Dec 13 14:17:37.931197 env[1849]: time="2024-12-13T14:17:37.931137307Z" level=warning msg="cleaning up after shim disconnected" id=f1872f4034d3540370d1f356c5185ab7ee2726cea610e8ff326e29d8448813a9 namespace=k8s.io Dec 13 14:17:37.931197 env[1849]: time="2024-12-13T14:17:37.931181839Z" level=info msg="cleaning up dead shim" Dec 13 14:17:37.945932 env[1849]: time="2024-12-13T14:17:37.945858101Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:17:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5960 runtime=io.containerd.runc.v2\n" Dec 13 14:17:38.761817 kubelet[3023]: I1213 14:17:38.761758 3023 scope.go:117] "RemoveContainer" containerID="f1872f4034d3540370d1f356c5185ab7ee2726cea610e8ff326e29d8448813a9" Dec 13 14:17:38.765604 env[1849]: time="2024-12-13T14:17:38.765551615Z" level=info msg="CreateContainer within sandbox \"a1992238b9401c6a8b98cbb18db765a2097d3e9227d4c88a1d9566e524bdc65a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Dec 13 14:17:38.795885 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3312867794.mount: Deactivated successfully. Dec 13 14:17:38.806558 env[1849]: time="2024-12-13T14:17:38.806478146Z" level=info msg="CreateContainer within sandbox \"a1992238b9401c6a8b98cbb18db765a2097d3e9227d4c88a1d9566e524bdc65a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"18702beef25199da657a9a1fa2f6c32cc9fc58804b81948e0dd72a76b4e1f18e\"" Dec 13 14:17:38.807335 env[1849]: time="2024-12-13T14:17:38.807218559Z" level=info msg="StartContainer for \"18702beef25199da657a9a1fa2f6c32cc9fc58804b81948e0dd72a76b4e1f18e\"" Dec 13 14:17:38.940105 env[1849]: time="2024-12-13T14:17:38.938539939Z" level=info msg="StartContainer for \"18702beef25199da657a9a1fa2f6c32cc9fc58804b81948e0dd72a76b4e1f18e\" returns successfully" Dec 13 14:17:46.158878 kubelet[3023]: E1213 14:17:46.158450 3023 controller.go:195] "Failed to update lease" err="Put \"https://172.31.27.214:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-214?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
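The two "Failed to update lease" errors bracketing these restarts come from kubelet's node-lease controller: it renews a Lease object in kube-node-lease roughly every 10 seconds, with the 10s client timeout visible as "?timeout=10s" in the failing URL. Isolated timeouts like these, coinciding with the kube-controller-manager and kube-scheduler containers being restarted on this node, are retried and harmless; only sustained failure would lead the node controller to mark the node unhealthy. One renewal looks roughly like this client-go sketch (kubelet's real controller also creates the Lease and retries with backoff):

    package leaseutil

    import (
    	"context"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // renewLease performs one renewal of a node's Lease in kube-node-lease,
    // bounded by the same 10s timeout seen in the failing requests above.
    func renewLease(cs kubernetes.Interface, node string) error {
    	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    	defer cancel()

    	lease, err := cs.CoordinationV1().Leases("kube-node-lease").Get(ctx, node, metav1.GetOptions{})
    	if err != nil {
    		return err
    	}
    	now := metav1.NewMicroTime(time.Now())
    	lease.Spec.RenewTime = &now
    	_, err = cs.CoordinationV1().Leases("kube-node-lease").Update(ctx, lease, metav1.UpdateOptions{})
    	return err
    }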