Feb 9 09:45:13.962249 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Feb 9 09:45:13.962288 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Feb 9 08:56:26 -00 2024
Feb 9 09:45:13.962311 kernel: efi: EFI v2.70 by EDK II
Feb 9 09:45:13.962326 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7ac1aa98 MEMRESERVE=0x71a8cf98
Feb 9 09:45:13.962339 kernel: ACPI: Early table checksum verification disabled
Feb 9 09:45:13.962353 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Feb 9 09:45:13.962369 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Feb 9 09:45:13.962383 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 9 09:45:13.962397 kernel: ACPI: DSDT 0x0000000078640000 00154F (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Feb 9 09:45:13.962411 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 9 09:45:13.962429 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Feb 9 09:45:13.962442 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Feb 9 09:45:13.962456 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Feb 9 09:45:13.962470 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 9 09:45:13.962486 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Feb 9 09:45:13.962506 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Feb 9 09:45:13.962520 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Feb 9 09:45:13.962535 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Feb 9 09:45:13.962549 kernel: printk: bootconsole [uart0] enabled
Feb 9 09:45:13.962564 kernel: NUMA: Failed to initialise from firmware
Feb 9 09:45:13.962579 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 9 09:45:13.962593 kernel: NUMA: NODE_DATA [mem 0x4b5841900-0x4b5846fff]
Feb 9 09:45:13.962632 kernel: Zone ranges:
Feb 9 09:45:13.962650 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Feb 9 09:45:13.962701 kernel: DMA32 empty
Feb 9 09:45:13.962723 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Feb 9 09:45:13.962744 kernel: Movable zone start for each node
Feb 9 09:45:13.962759 kernel: Early memory node ranges
Feb 9 09:45:13.962774 kernel: node 0: [mem 0x0000000040000000-0x00000000786effff]
Feb 9 09:45:13.962789 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Feb 9 09:45:13.962804 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Feb 9 09:45:13.962818 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Feb 9 09:45:13.962832 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Feb 9 09:45:13.962847 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Feb 9 09:45:13.962861 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 9 09:45:13.962876 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Feb 9 09:45:13.962890 kernel: psci: probing for conduit method from ACPI.
Feb 9 09:45:13.962905 kernel: psci: PSCIv1.0 detected in firmware.
Feb 9 09:45:13.962923 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 9 09:45:13.962938 kernel: psci: Trusted OS migration not required
Feb 9 09:45:13.962959 kernel: psci: SMC Calling Convention v1.1
Feb 9 09:45:13.962975 kernel: ACPI: SRAT not present
Feb 9 09:45:13.962990 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784
Feb 9 09:45:13.963010 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096
Feb 9 09:45:13.963025 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 9 09:45:13.963040 kernel: Detected PIPT I-cache on CPU0
Feb 9 09:45:13.963055 kernel: CPU features: detected: GIC system register CPU interface
Feb 9 09:45:13.963070 kernel: CPU features: detected: Spectre-v2
Feb 9 09:45:13.963085 kernel: CPU features: detected: Spectre-v3a
Feb 9 09:45:13.963100 kernel: CPU features: detected: Spectre-BHB
Feb 9 09:45:13.963116 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 9 09:45:13.963131 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 9 09:45:13.963146 kernel: CPU features: detected: ARM erratum 1742098
Feb 9 09:45:13.963162 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Feb 9 09:45:13.963181 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Feb 9 09:45:13.963196 kernel: Policy zone: Normal
Feb 9 09:45:13.963214 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=14ffd9340f674a8d04c9d43eed85484d8b2b7e2bcd8b36a975c9ac66063d537d
Feb 9 09:45:13.963230 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 09:45:13.963246 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 9 09:45:13.963261 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 9 09:45:13.963277 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 09:45:13.963312 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Feb 9 09:45:13.963331 kernel: Memory: 3826316K/4030464K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 204148K reserved, 0K cma-reserved)
Feb 9 09:45:13.963346 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 9 09:45:13.963367 kernel: trace event string verifier disabled
Feb 9 09:45:13.963382 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 9 09:45:13.963398 kernel: rcu: RCU event tracing is enabled.
Feb 9 09:45:13.963413 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 9 09:45:13.963429 kernel: Trampoline variant of Tasks RCU enabled.
Feb 9 09:45:13.963444 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 09:45:13.963460 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
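The kernel command line captured above drives the rest of the boot: root=LABEL=ROOT selects the root filesystem and verity.usrhash pins the dm-verity root hash for /usr. A minimal sketch of inspecting these values from a booted system, using standard util-linux/coreutils tools (the ROOT label is taken from the log itself):

    # Print the command line the kernel was booted with
    cat /proc/cmdline
    # Resolve the block device carrying the filesystem label referenced by root=LABEL=ROOT
    blkid -L ROOT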
Feb 9 09:45:13.963475 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 9 09:45:13.963490 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 9 09:45:13.963505 kernel: GICv3: 96 SPIs implemented
Feb 9 09:45:13.963520 kernel: GICv3: 0 Extended SPIs implemented
Feb 9 09:45:13.963535 kernel: GICv3: Distributor has no Range Selector support
Feb 9 09:45:13.963554 kernel: Root IRQ handler: gic_handle_irq
Feb 9 09:45:13.963569 kernel: GICv3: 16 PPIs implemented
Feb 9 09:45:13.963584 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Feb 9 09:45:13.963599 kernel: ACPI: SRAT not present
Feb 9 09:45:13.965490 kernel: ITS [mem 0x10080000-0x1009ffff]
Feb 9 09:45:13.965508 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000a0000 (indirect, esz 8, psz 64K, shr 1)
Feb 9 09:45:13.965524 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000b0000 (flat, esz 8, psz 64K, shr 1)
Feb 9 09:45:13.965539 kernel: GICv3: using LPI property table @0x00000004000c0000
Feb 9 09:45:13.965554 kernel: ITS: Using hypervisor restricted LPI range [128]
Feb 9 09:45:13.965569 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000d0000
Feb 9 09:45:13.965585 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Feb 9 09:45:13.965623 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Feb 9 09:45:13.965643 kernel: sched_clock: 56 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Feb 9 09:45:13.965659 kernel: Console: colour dummy device 80x25
Feb 9 09:45:13.965674 kernel: printk: console [tty1] enabled
Feb 9 09:45:13.965690 kernel: ACPI: Core revision 20210730
Feb 9 09:45:13.965706 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Feb 9 09:45:13.965721 kernel: pid_max: default: 32768 minimum: 301
Feb 9 09:45:13.965737 kernel: LSM: Security Framework initializing
Feb 9 09:45:13.965752 kernel: SELinux: Initializing.
Feb 9 09:45:13.965768 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 09:45:13.965789 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 09:45:13.965804 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 09:45:13.965820 kernel: Platform MSI: ITS@0x10080000 domain created
Feb 9 09:45:13.965835 kernel: PCI/MSI: ITS@0x10080000 domain created
Feb 9 09:45:13.965850 kernel: Remapping and enabling EFI services.
Feb 9 09:45:13.965866 kernel: smp: Bringing up secondary CPUs ...
Feb 9 09:45:13.965881 kernel: Detected PIPT I-cache on CPU1
Feb 9 09:45:13.965897 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Feb 9 09:45:13.965913 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000e0000
Feb 9 09:45:13.965933 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Feb 9 09:45:13.965948 kernel: smp: Brought up 1 node, 2 CPUs
Feb 9 09:45:13.965964 kernel: SMP: Total of 2 processors activated.
Feb 9 09:45:13.965979 kernel: CPU features: detected: 32-bit EL0 Support
Feb 9 09:45:13.965995 kernel: CPU features: detected: 32-bit EL1 Support
Feb 9 09:45:13.966010 kernel: CPU features: detected: CRC32 instructions
Feb 9 09:45:13.966026 kernel: CPU: All CPU(s) started at EL1
Feb 9 09:45:13.966041 kernel: alternatives: patching kernel code
Feb 9 09:45:13.966056 kernel: devtmpfs: initialized
Feb 9 09:45:13.966075 kernel: KASLR disabled due to lack of seed
Feb 9 09:45:13.966091 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 09:45:13.966107 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 9 09:45:13.966133 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 09:45:13.966154 kernel: SMBIOS 3.0.0 present.
Feb 9 09:45:13.966170 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Feb 9 09:45:13.966186 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 09:45:13.966202 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 9 09:45:13.966218 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 9 09:45:13.966235 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 9 09:45:13.966251 kernel: audit: initializing netlink subsys (disabled)
Feb 9 09:45:13.966267 kernel: audit: type=2000 audit(0.249:1): state=initialized audit_enabled=0 res=1
Feb 9 09:45:13.966288 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 09:45:13.966304 kernel: cpuidle: using governor menu
Feb 9 09:45:13.966320 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 9 09:45:13.966337 kernel: ASID allocator initialised with 32768 entries
Feb 9 09:45:13.966353 kernel: ACPI: bus type PCI registered
Feb 9 09:45:13.966373 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 09:45:13.966389 kernel: Serial: AMBA PL011 UART driver
Feb 9 09:45:13.966405 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 09:45:13.966421 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Feb 9 09:45:13.966438 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 09:45:13.966454 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Feb 9 09:45:13.966470 kernel: cryptd: max_cpu_qlen set to 1000
Feb 9 09:45:13.966486 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 9 09:45:13.966502 kernel: ACPI: Added _OSI(Module Device)
Feb 9 09:45:13.966523 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 09:45:13.966539 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 09:45:13.966555 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 09:45:13.966571 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 09:45:13.966587 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 09:45:13.966618 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 09:45:13.966659 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 9 09:45:13.966677 kernel: ACPI: Interpreter enabled
Feb 9 09:45:13.966693 kernel: ACPI: Using GIC for interrupt routing
Feb 9 09:45:13.966714 kernel: ACPI: MCFG table detected, 1 entries
Feb 9 09:45:13.966731 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Feb 9 09:45:13.967027 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 9 09:45:13.967229 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 9 09:45:13.967448 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 9 09:45:13.967670 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Feb 9 09:45:13.967870 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Feb 9 09:45:13.967898 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Feb 9 09:45:13.967915 kernel: acpiphp: Slot [1] registered
Feb 9 09:45:13.967932 kernel: acpiphp: Slot [2] registered
Feb 9 09:45:13.967948 kernel: acpiphp: Slot [3] registered
Feb 9 09:45:13.967964 kernel: acpiphp: Slot [4] registered
Feb 9 09:45:13.967980 kernel: acpiphp: Slot [5] registered
Feb 9 09:45:13.967996 kernel: acpiphp: Slot [6] registered
Feb 9 09:45:13.968012 kernel: acpiphp: Slot [7] registered
Feb 9 09:45:13.968028 kernel: acpiphp: Slot [8] registered
Feb 9 09:45:13.968048 kernel: acpiphp: Slot [9] registered
Feb 9 09:45:13.968065 kernel: acpiphp: Slot [10] registered
Feb 9 09:45:13.968081 kernel: acpiphp: Slot [11] registered
Feb 9 09:45:13.968097 kernel: acpiphp: Slot [12] registered
Feb 9 09:45:13.968112 kernel: acpiphp: Slot [13] registered
Feb 9 09:45:13.968128 kernel: acpiphp: Slot [14] registered
Feb 9 09:45:13.968144 kernel: acpiphp: Slot [15] registered
Feb 9 09:45:13.968160 kernel: acpiphp: Slot [16] registered
Feb 9 09:45:13.968177 kernel: acpiphp: Slot [17] registered
Feb 9 09:45:13.968193 kernel: acpiphp: Slot [18] registered
Feb 9 09:45:13.968213 kernel: acpiphp: Slot [19] registered
Feb 9 09:45:13.968229 kernel: acpiphp: Slot [20] registered
Feb 9 09:45:13.968245 kernel: acpiphp: Slot [21] registered
Feb 9 09:45:13.968261 kernel: acpiphp: Slot [22] registered
Feb 9 09:45:13.968277 kernel: acpiphp: Slot [23] registered
Feb 9 09:45:13.968293 kernel: acpiphp: Slot [24] registered
Feb 9 09:45:13.968309 kernel: acpiphp: Slot [25] registered
Feb 9 09:45:13.968325 kernel: acpiphp: Slot [26] registered
Feb 9 09:45:13.968341 kernel: acpiphp: Slot [27] registered
Feb 9 09:45:13.968361 kernel: acpiphp: Slot [28] registered
Feb 9 09:45:13.968377 kernel: acpiphp: Slot [29] registered
Feb 9 09:45:13.968393 kernel: acpiphp: Slot [30] registered
Feb 9 09:45:13.968409 kernel: acpiphp: Slot [31] registered
Feb 9 09:45:13.968425 kernel: PCI host bridge to bus 0000:00
Feb 9 09:45:13.968642 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Feb 9 09:45:13.973956 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 9 09:45:13.974143 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Feb 9 09:45:13.974342 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Feb 9 09:45:13.974575 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Feb 9 09:45:13.974879 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Feb 9 09:45:13.975089 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Feb 9 09:45:13.975336 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 9 09:45:13.975550 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Feb 9 09:45:13.975821 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 9 09:45:13.976039 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 9 09:45:13.976242 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Feb 9 09:45:13.976442 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Feb 9 09:45:13.976691 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Feb 9 09:45:13.976900 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 9 09:45:13.977097 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Feb 9 09:45:13.977302 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Feb 9 09:45:13.977502 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Feb 9 09:45:13.977726 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Feb 9 09:45:13.977933 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Feb 9 09:45:13.978116 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Feb 9 09:45:13.978295 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 9 09:45:13.978480 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Feb 9 09:45:13.978508 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 9 09:45:13.978526 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 9 09:45:13.978543 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 9 09:45:13.978559 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 9 09:45:13.978576 kernel: iommu: Default domain type: Translated
Feb 9 09:45:13.978592 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 9 09:45:13.978627 kernel: vgaarb: loaded
Feb 9 09:45:13.978647 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 09:45:13.978664 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 9 09:45:13.978686 kernel: PTP clock support registered
Feb 9 09:45:13.978702 kernel: Registered efivars operations
Feb 9 09:45:13.978719 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 9 09:45:13.978735 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 09:45:13.978751 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 09:45:13.978767 kernel: pnp: PnP ACPI init
Feb 9 09:45:13.978998 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Feb 9 09:45:13.979023 kernel: pnp: PnP ACPI: found 1 devices
Feb 9 09:45:13.979040 kernel: NET: Registered PF_INET protocol family
Feb 9 09:45:13.979061 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 09:45:13.979078 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 9 09:45:13.979095 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 09:45:13.979111 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 9 09:45:13.979128 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 9 09:45:13.979144 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 9 09:45:13.979161 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 09:45:13.979177 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 09:45:13.979193 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 09:45:13.979214 kernel: PCI: CLS 0 bytes, default 64
Feb 9 09:45:13.979230 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Feb 9 09:45:13.979246 kernel: kvm [1]: HYP mode not available
Feb 9 09:45:13.979263 kernel: Initialise system trusted keyrings
Feb 9 09:45:13.979279 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 9 09:45:13.979312 kernel: Key type asymmetric registered
Feb 9 09:45:13.979330 kernel: Asymmetric key parser 'x509' registered
Feb 9 09:45:13.979346 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 09:45:13.979363 kernel: io scheduler mq-deadline registered
Feb 9 09:45:13.979385 kernel: io scheduler kyber registered
Feb 9 09:45:13.979401 kernel: io scheduler bfq registered
Feb 9 09:45:13.990699 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Feb 9 09:45:13.990747 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 9 09:45:13.990766 kernel: ACPI: button: Power Button [PWRB]
Feb 9 09:45:13.990783 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 09:45:13.990800 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Feb 9 09:45:13.991044 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Feb 9 09:45:13.991077 kernel: printk: console [ttyS0] disabled
Feb 9 09:45:13.991095 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Feb 9 09:45:13.991112 kernel: printk: console [ttyS0] enabled
Feb 9 09:45:13.991128 kernel: printk: bootconsole [uart0] disabled
Feb 9 09:45:13.991145 kernel: thunder_xcv, ver 1.0
Feb 9 09:45:13.991162 kernel: thunder_bgx, ver 1.0
Feb 9 09:45:13.991178 kernel: nicpf, ver 1.0
Feb 9 09:45:13.991194 kernel: nicvf, ver 1.0
Feb 9 09:45:13.991437 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 9 09:45:13.992720 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-09T09:45:13 UTC (1707471913)
Feb 9 09:45:13.992750 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 9 09:45:13.992767 kernel: NET: Registered PF_INET6 protocol family
Feb 9 09:45:13.992784 kernel: Segment Routing with IPv6
Feb 9 09:45:13.992801 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 09:45:13.992818 kernel: NET: Registered PF_PACKET protocol family
Feb 9 09:45:13.992834 kernel: Key type dns_resolver registered
Feb 9 09:45:13.992850 kernel: registered taskstats version 1
Feb 9 09:45:13.992872 kernel: Loading compiled-in X.509 certificates
Feb 9 09:45:13.992889 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: ca91574208414224935c9cea513398977daf917d'
Feb 9 09:45:13.992905 kernel: Key type .fscrypt registered
Feb 9 09:45:13.992922 kernel: Key type fscrypt-provisioning registered
Feb 9 09:45:13.992937 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 9 09:45:13.992954 kernel: ima: Allocated hash algorithm: sha1
Feb 9 09:45:13.992970 kernel: ima: No architecture policies found
Feb 9 09:45:13.992986 kernel: Freeing unused kernel memory: 34688K
Feb 9 09:45:13.993002 kernel: Run /init as init process
Feb 9 09:45:13.993022 kernel: with arguments:
Feb 9 09:45:13.993039 kernel: /init
Feb 9 09:45:13.993054 kernel: with environment:
Feb 9 09:45:13.993070 kernel: HOME=/
Feb 9 09:45:13.993086 kernel: TERM=linux
Feb 9 09:45:13.993102 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 9 09:45:13.993123 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 09:45:13.993143 systemd[1]: Detected virtualization amazon.
Feb 9 09:45:13.993166 systemd[1]: Detected architecture arm64.
Feb 9 09:45:13.993183 systemd[1]: Running in initrd.
Feb 9 09:45:13.993200 systemd[1]: No hostname configured, using default hostname.
Feb 9 09:45:13.993217 systemd[1]: Hostname set to .
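The PCI devices enumerated above all carry Amazon's vendor ID 1d0f: 1d0f:8061 is the NVMe EBS controller at 00:04.0 and 1d0f:ec20 the ENA network adapter at 00:05.0, both of which the drivers bind to later in this log. A minimal sketch for confirming this from userspace, assuming pciutils is available:

    # List PCI devices with numeric [vendor:device] IDs to match the log entries
    lspci -nn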
Feb 9 09:45:13.993235 systemd[1]: Initializing machine ID from VM UUID.
Feb 9 09:45:13.993253 systemd[1]: Queued start job for default target initrd.target.
Feb 9 09:45:13.993270 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 09:45:13.993287 systemd[1]: Reached target cryptsetup.target.
Feb 9 09:45:13.993308 systemd[1]: Reached target paths.target.
Feb 9 09:45:13.993326 systemd[1]: Reached target slices.target.
Feb 9 09:45:13.993343 systemd[1]: Reached target swap.target.
Feb 9 09:45:13.993360 systemd[1]: Reached target timers.target.
Feb 9 09:45:13.993378 systemd[1]: Listening on iscsid.socket.
Feb 9 09:45:13.993396 systemd[1]: Listening on iscsiuio.socket.
Feb 9 09:45:13.993413 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 09:45:13.993431 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 09:45:13.993453 systemd[1]: Listening on systemd-journald.socket.
Feb 9 09:45:13.993470 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 09:45:13.993488 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 09:45:13.993505 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 09:45:13.993523 systemd[1]: Reached target sockets.target.
Feb 9 09:45:13.993540 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 09:45:13.993557 systemd[1]: Finished network-cleanup.service.
Feb 9 09:45:13.993575 systemd[1]: Starting systemd-fsck-usr.service...
Feb 9 09:45:13.993592 systemd[1]: Starting systemd-journald.service...
Feb 9 09:45:13.994665 systemd[1]: Starting systemd-modules-load.service...
Feb 9 09:45:13.994688 systemd[1]: Starting systemd-resolved.service...
Feb 9 09:45:13.994707 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 9 09:45:13.994725 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 09:45:13.994743 kernel: audit: type=1130 audit(1707471913.978:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:13.994761 systemd[1]: Finished systemd-fsck-usr.service.
Feb 9 09:45:13.994783 systemd-journald[308]: Journal started
Feb 9 09:45:13.994874 systemd-journald[308]: Runtime Journal (/run/log/journal/ec27af2446770d899455b0743b52e912) is 8.0M, max 75.4M, 67.4M free.
Feb 9 09:45:13.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:13.960845 systemd-modules-load[309]: Inserted module 'overlay'
Feb 9 09:45:14.011638 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 9 09:45:14.015105 systemd-modules-load[309]: Inserted module 'br_netfilter'
Feb 9 09:45:14.026537 kernel: Bridge firewalling registered
Feb 9 09:45:14.026571 kernel: audit: type=1130 audit(1707471914.014:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:14.026596 systemd[1]: Started systemd-journald.service.
Feb 9 09:45:14.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:14.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:14.043011 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 9 09:45:14.054995 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 9 09:45:14.067060 kernel: audit: type=1130 audit(1707471914.040:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:14.067103 kernel: SCSI subsystem initialized
Feb 9 09:45:14.067127 kernel: audit: type=1130 audit(1707471914.051:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:14.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:14.073372 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 09:45:14.094722 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 9 09:45:14.094790 kernel: device-mapper: uevent: version 1.0.3
Feb 9 09:45:14.099208 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 9 09:45:14.102044 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 09:45:14.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:14.114698 systemd-resolved[310]: Positive Trust Anchors:
Feb 9 09:45:14.114713 systemd-resolved[310]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 09:45:14.114774 systemd-resolved[310]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 09:45:14.151470 kernel: audit: type=1130 audit(1707471914.107:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:14.150663 systemd-modules-load[309]: Inserted module 'dm_multipath'
Feb 9 09:45:14.153846 systemd[1]: Finished systemd-modules-load.service.
Feb 9 09:45:14.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:14.158288 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 9 09:45:14.183568 kernel: audit: type=1130 audit(1707471914.156:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:14.183616 kernel: audit: type=1130 audit(1707471914.171:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:14.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:14.184824 systemd[1]: Starting dracut-cmdline.service...
Feb 9 09:45:14.197894 systemd[1]: Starting systemd-sysctl.service...
Feb 9 09:45:14.223547 systemd[1]: Finished systemd-sysctl.service.
Feb 9 09:45:14.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:14.239134 kernel: audit: type=1130 audit(1707471914.228:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:14.241449 dracut-cmdline[327]: dracut-dracut-053
Feb 9 09:45:14.246412 dracut-cmdline[327]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=14ffd9340f674a8d04c9d43eed85484d8b2b7e2bcd8b36a975c9ac66063d537d
Feb 9 09:45:14.373640 kernel: Loading iSCSI transport class v2.0-870.
Feb 9 09:45:14.384639 kernel: iscsi: registered transport (tcp)
Feb 9 09:45:14.409076 kernel: iscsi: registered transport (qla4xxx)
Feb 9 09:45:14.409146 kernel: QLogic iSCSI HBA Driver
Feb 9 09:45:14.591636 kernel: random: crng init done
Feb 9 09:45:14.591591 systemd-resolved[310]: Defaulting to hostname 'linux'.
Feb 9 09:45:14.594903 systemd[1]: Started systemd-resolved.service.
Feb 9 09:45:14.611553 systemd[1]: Reached target nss-lookup.target.
Feb 9 09:45:14.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:14.622126 systemd[1]: Finished dracut-cmdline.service.
Feb 9 09:45:14.625097 kernel: audit: type=1130 audit(1707471914.609:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:14.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:14.626644 systemd[1]: Starting dracut-pre-udev.service...
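The bridge message above ("Update your scripts to load br_netfilter if you need this") is actionable: since the br_netfilter split-out in kernel 3.18, bridged traffic only traverses iptables when that module is loaded, which is why systemd-modules-load inserts it here. A minimal sketch of doing the same by hand, using the standard modules-load.d and sysctl interfaces (whether you want bridged frames filtered at all is a policy choice):

    # Load the module immediately
    modprobe br_netfilter
    # Load it on every boot
    echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
    # Have iptables see bridged IPv4 frames
    sysctl -w net.bridge.bridge-nf-call-iptables=1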
Feb 9 09:45:14.691646 kernel: raid6: neonx8 gen() 6408 MB/s
Feb 9 09:45:14.709643 kernel: raid6: neonx8 xor() 4546 MB/s
Feb 9 09:45:14.727640 kernel: raid6: neonx4 gen() 6587 MB/s
Feb 9 09:45:14.745636 kernel: raid6: neonx4 xor() 4689 MB/s
Feb 9 09:45:14.763642 kernel: raid6: neonx2 gen() 5805 MB/s
Feb 9 09:45:14.781636 kernel: raid6: neonx2 xor() 4394 MB/s
Feb 9 09:45:14.799641 kernel: raid6: neonx1 gen() 4519 MB/s
Feb 9 09:45:14.817637 kernel: raid6: neonx1 xor() 3588 MB/s
Feb 9 09:45:14.835643 kernel: raid6: int64x8 gen() 3452 MB/s
Feb 9 09:45:14.853636 kernel: raid6: int64x8 xor() 2044 MB/s
Feb 9 09:45:14.871643 kernel: raid6: int64x4 gen() 3859 MB/s
Feb 9 09:45:14.889635 kernel: raid6: int64x4 xor() 2164 MB/s
Feb 9 09:45:14.907641 kernel: raid6: int64x2 gen() 3624 MB/s
Feb 9 09:45:14.925636 kernel: raid6: int64x2 xor() 1919 MB/s
Feb 9 09:45:14.943635 kernel: raid6: int64x1 gen() 2768 MB/s
Feb 9 09:45:14.963119 kernel: raid6: int64x1 xor() 1436 MB/s
Feb 9 09:45:14.963162 kernel: raid6: using algorithm neonx4 gen() 6587 MB/s
Feb 9 09:45:14.963185 kernel: raid6: .... xor() 4689 MB/s, rmw enabled
Feb 9 09:45:14.964928 kernel: raid6: using neon recovery algorithm
Feb 9 09:45:14.983641 kernel: xor: measuring software checksum speed
Feb 9 09:45:14.988151 kernel: 8regs : 9336 MB/sec
Feb 9 09:45:14.988181 kernel: 32regs : 11107 MB/sec
Feb 9 09:45:14.992625 kernel: arm64_neon : 9658 MB/sec
Feb 9 09:45:14.992655 kernel: xor: using function: 32regs (11107 MB/sec)
Feb 9 09:45:15.082648 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Feb 9 09:45:15.100642 systemd[1]: Finished dracut-pre-udev.service.
Feb 9 09:45:15.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:15.105000 audit: BPF prog-id=7 op=LOAD
Feb 9 09:45:15.105000 audit: BPF prog-id=8 op=LOAD
Feb 9 09:45:15.107638 systemd[1]: Starting systemd-udevd.service...
Feb 9 09:45:15.136749 systemd-udevd[507]: Using default interface naming scheme 'v252'.
Feb 9 09:45:15.147540 systemd[1]: Started systemd-udevd.service.
Feb 9 09:45:15.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:15.155913 systemd[1]: Starting dracut-pre-trigger.service...
Feb 9 09:45:15.182719 dracut-pre-trigger[518]: rd.md=0: removing MD RAID activation
Feb 9 09:45:15.242486 systemd[1]: Finished dracut-pre-trigger.service.
Feb 9 09:45:15.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:15.245695 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 09:45:15.350591 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 09:45:15.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:15.481508 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 9 09:45:15.481583 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Feb 9 09:45:15.498066 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Feb 9 09:45:15.498146 kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 9 09:45:15.498436 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 9 09:45:15.500678 kernel: nvme nvme0: pci function 0000:00:04.0
Feb 9 09:45:15.510630 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:4f:72:8d:07:b3
Feb 9 09:45:15.510907 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 9 09:45:15.517710 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 9 09:45:15.517761 kernel: GPT:9289727 != 16777215
Feb 9 09:45:15.517784 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 9 09:45:15.519855 kernel: GPT:9289727 != 16777215
Feb 9 09:45:15.521137 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 9 09:45:15.524485 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 9 09:45:15.527965 (udev-worker)[564]: Network interface NamePolicy= disabled on kernel command line.
Feb 9 09:45:15.599643 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (560)
Feb 9 09:45:15.638986 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 9 09:45:15.710172 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 09:45:15.722991 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 9 09:45:15.725032 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 9 09:45:15.748171 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 9 09:45:15.752414 systemd[1]: Starting disk-uuid.service...
Feb 9 09:45:15.764768 disk-uuid[667]: Primary Header is updated.
Feb 9 09:45:15.764768 disk-uuid[667]: Secondary Entries is updated.
Feb 9 09:45:15.764768 disk-uuid[667]: Secondary Header is updated.
Feb 9 09:45:15.774646 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 9 09:45:15.783650 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 9 09:45:15.793642 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 9 09:45:16.789506 disk-uuid[668]: The operation has completed successfully.
Feb 9 09:45:16.791762 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 9 09:45:16.950462 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 9 09:45:16.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:16.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:16.950695 systemd[1]: Finished disk-uuid.service.
Feb 9 09:45:16.974119 systemd[1]: Starting verity-setup.service...
Feb 9 09:45:17.008350 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 9 09:45:17.089911 systemd[1]: Found device dev-mapper-usr.device.
Feb 9 09:45:17.094222 systemd[1]: Finished verity-setup.service.
Feb 9 09:45:17.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:17.099075 systemd[1]: Mounting sysusr-usr.mount...
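The GPT warnings above ("Primary header thinks Alt. header is not at the end of the disk", 9289727 != 16777215) are expected on first boot: the disk image carries its backup GPT header at the image's original last sector, while the EBS volume is larger. The disk-uuid.service run that follows rewrites the headers, which is what the "Primary Header is updated" / "Secondary Header is updated" lines report. A hedged sketch of the equivalent manual repair with sgdisk (or interactively with GNU Parted, as the kernel message suggests):

    # Move the backup GPT header and partition entries to the real end of the disk
    sgdisk --move-second-header /dev/nvme0n1
    # Parted's print command also detects the mismatch and offers to fix it interactively
    parted /dev/nvme0n1 print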
Feb 9 09:45:17.184464 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 9 09:45:17.184345 systemd[1]: Mounted sysusr-usr.mount.
Feb 9 09:45:17.186136 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 9 09:45:17.192504 systemd[1]: Starting ignition-setup.service...
Feb 9 09:45:17.197866 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 9 09:45:17.219335 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 9 09:45:17.219397 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 9 09:45:17.222071 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Feb 9 09:45:17.231952 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 9 09:45:17.248757 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 9 09:45:17.279544 systemd[1]: Finished ignition-setup.service.
Feb 9 09:45:17.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:17.284056 systemd[1]: Starting ignition-fetch-offline.service...
Feb 9 09:45:17.352288 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 9 09:45:17.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:17.354000 audit: BPF prog-id=9 op=LOAD
Feb 9 09:45:17.358597 systemd[1]: Starting systemd-networkd.service...
Feb 9 09:45:17.408269 systemd-networkd[1180]: lo: Link UP
Feb 9 09:45:17.408294 systemd-networkd[1180]: lo: Gained carrier
Feb 9 09:45:17.412091 systemd-networkd[1180]: Enumeration completed
Feb 9 09:45:17.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:17.412808 systemd-networkd[1180]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 09:45:17.412853 systemd[1]: Started systemd-networkd.service.
Feb 9 09:45:17.414984 systemd[1]: Reached target network.target.
Feb 9 09:45:17.419348 systemd-networkd[1180]: eth0: Link UP
Feb 9 09:45:17.419362 systemd-networkd[1180]: eth0: Gained carrier
Feb 9 09:45:17.440817 systemd[1]: Starting iscsiuio.service...
Feb 9 09:45:17.454409 systemd[1]: Started iscsiuio.service.
Feb 9 09:45:17.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:17.458177 systemd-networkd[1180]: eth0: DHCPv4 address 172.31.16.76/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 9 09:45:17.461268 systemd[1]: Starting iscsid.service...
Feb 9 09:45:17.475628 systemd[1]: Started iscsid.service.
Feb 9 09:45:17.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:17.480788 iscsid[1185]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 09:45:17.480788 iscsid[1185]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
Feb 9 09:45:17.480788 iscsid[1185]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Feb 9 09:45:17.480788 iscsid[1185]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 9 09:45:17.480788 iscsid[1185]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 9 09:45:17.480788 iscsid[1185]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 09:45:17.480788 iscsid[1185]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 9 09:45:17.507248 systemd[1]: Starting dracut-initqueue.service...
Feb 9 09:45:17.534631 systemd[1]: Finished dracut-initqueue.service.
Feb 9 09:45:17.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:17.537427 systemd[1]: Reached target remote-fs-pre.target.
Feb 9 09:45:17.539527 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 09:45:17.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:17.540278 systemd[1]: Reached target remote-fs.target.
Feb 9 09:45:17.542079 systemd[1]: Starting dracut-pre-mount.service...
Feb 9 09:45:17.568761 systemd[1]: Finished dracut-pre-mount.service.
Feb 9 09:45:17.889573 ignition[1126]: Ignition 2.14.0
Feb 9 09:45:17.889600 ignition[1126]: Stage: fetch-offline
Feb 9 09:45:17.889984 ignition[1126]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 09:45:17.890047 ignition[1126]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 09:45:17.911672 ignition[1126]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 09:45:17.914583 ignition[1126]: Ignition finished successfully
Feb 9 09:45:17.917161 systemd[1]: Finished ignition-fetch-offline.service.
Feb 9 09:45:17.928041 kernel: kauditd_printk_skb: 18 callbacks suppressed
Feb 9 09:45:17.928130 kernel: audit: type=1130 audit(1707471917.919:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:17.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:17.922242 systemd[1]: Starting ignition-fetch.service...
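The iscsid warning above spells out its own fix. A minimal sketch of creating the missing file (the IQN below is a made-up example following the documented InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier] format; on this AWS boot the warning is harmless since no iSCSI targets are used):

    # Give iscsid a properly formatted initiator name
    echo "InitiatorName=iqn.2024-02.com.example:node1" > /etc/iscsi/initiatorname.iscsi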
Feb 9 09:45:17.941009 ignition[1204]: Ignition 2.14.0
Feb 9 09:45:17.941535 ignition[1204]: Stage: fetch
Feb 9 09:45:17.941925 ignition[1204]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 09:45:17.941986 ignition[1204]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 09:45:17.957258 ignition[1204]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 09:45:17.959734 ignition[1204]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 09:45:17.979965 ignition[1204]: INFO : PUT result: OK
Feb 9 09:45:17.983730 ignition[1204]: DEBUG : parsed url from cmdline: ""
Feb 9 09:45:17.983730 ignition[1204]: INFO : no config URL provided
Feb 9 09:45:17.983730 ignition[1204]: INFO : reading system config file "/usr/lib/ignition/user.ign"
Feb 9 09:45:17.989952 ignition[1204]: INFO : no config at "/usr/lib/ignition/user.ign"
Feb 9 09:45:17.989952 ignition[1204]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 09:45:17.989952 ignition[1204]: INFO : PUT result: OK
Feb 9 09:45:17.996081 ignition[1204]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 9 09:45:17.999192 ignition[1204]: INFO : GET result: OK
Feb 9 09:45:18.000782 ignition[1204]: DEBUG : parsing config with SHA512: 90ae4dd299ccd24e2aa2052f484da0ffb47405d2db144bce35bf6a90a794e047f7bdeba604e71deb125735f263350b3cbc8391fa537e98601e54e0a71eecc3d8
Feb 9 09:45:18.079324 unknown[1204]: fetched base config from "system"
Feb 9 09:45:18.079353 unknown[1204]: fetched base config from "system"
Feb 9 09:45:18.079369 unknown[1204]: fetched user config from "aws"
Feb 9 09:45:18.085523 ignition[1204]: fetch: fetch complete
Feb 9 09:45:18.085551 ignition[1204]: fetch: fetch passed
Feb 9 09:45:18.085674 ignition[1204]: Ignition finished successfully
Feb 9 09:45:18.092259 systemd[1]: Finished ignition-fetch.service.
Feb 9 09:45:18.095386 systemd[1]: Starting ignition-kargs.service...
Feb 9 09:45:18.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:18.114699 kernel: audit: type=1130 audit(1707471918.090:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:18.127435 ignition[1210]: Ignition 2.14.0
Feb 9 09:45:18.127464 ignition[1210]: Stage: kargs
Feb 9 09:45:18.127785 ignition[1210]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 09:45:18.127843 ignition[1210]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 09:45:18.142948 ignition[1210]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 09:45:18.145678 ignition[1210]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 09:45:18.150768 ignition[1210]: INFO : PUT result: OK
Feb 9 09:45:18.157024 ignition[1210]: kargs: kargs passed
Feb 9 09:45:18.157324 ignition[1210]: Ignition finished successfully
Feb 9 09:45:18.161486 systemd[1]: Finished ignition-kargs.service.
Feb 9 09:45:18.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:18.165263 systemd[1]: Starting ignition-disks.service...
Feb 9 09:45:18.175636 kernel: audit: type=1130 audit(1707471918.161:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:18.180920 ignition[1216]: Ignition 2.14.0
Feb 9 09:45:18.180946 ignition[1216]: Stage: disks
Feb 9 09:45:18.181253 ignition[1216]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 09:45:18.181311 ignition[1216]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 09:45:18.194105 ignition[1216]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 09:45:18.196674 ignition[1216]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 09:45:18.199796 ignition[1216]: INFO : PUT result: OK
Feb 9 09:45:18.205671 ignition[1216]: disks: disks passed
Feb 9 09:45:18.205777 ignition[1216]: Ignition finished successfully
Feb 9 09:45:18.207386 systemd[1]: Finished ignition-disks.service.
Feb 9 09:45:18.213344 systemd[1]: Reached target initrd-root-device.target.
Feb 9 09:45:18.231373 kernel: audit: type=1130 audit(1707471918.211:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:18.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:18.215055 systemd[1]: Reached target local-fs-pre.target.
Feb 9 09:45:18.216667 systemd[1]: Reached target local-fs.target.
Feb 9 09:45:18.218232 systemd[1]: Reached target sysinit.target.
Feb 9 09:45:18.219851 systemd[1]: Reached target basic.target.
Feb 9 09:45:18.238480 systemd[1]: Starting systemd-fsck-root.service...
Feb 9 09:45:18.284247 systemd-fsck[1224]: ROOT: clean, 602/553520 files, 56013/553472 blocks
Feb 9 09:45:18.294458 systemd[1]: Finished systemd-fsck-root.service.
Feb 9 09:45:18.306467 kernel: audit: type=1130 audit(1707471918.295:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:18.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:18.297634 systemd[1]: Mounting sysroot.mount...
Feb 9 09:45:18.321659 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 9 09:45:18.323839 systemd[1]: Mounted sysroot.mount.
Feb 9 09:45:18.327584 systemd[1]: Reached target initrd-root-fs.target.
Feb 9 09:45:18.342843 systemd[1]: Mounting sysroot-usr.mount...
Feb 9 09:45:18.345068 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Feb 9 09:45:18.345146 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 9 09:45:18.345197 systemd[1]: Reached target ignition-diskful.target.
Feb 9 09:45:18.361096 systemd[1]: Mounted sysroot-usr.mount.
Feb 9 09:45:18.378959 systemd[1]: Mounting sysroot-usr-share-oem.mount...
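The PUT/GET pairs in the Ignition fetch, kargs, and disks stages above are the IMDSv2 session flow: a token is obtained with an HTTP PUT and then presented on each metadata request. A minimal sketch reproducing it with curl against the same endpoints the log shows (the token TTL value is an arbitrary choice):

    # Request a session token, then fetch the user data Ignition parsed
    TOKEN=$(curl -s -X PUT http://169.254.169.254/latest/api/token \
      -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
    curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
      http://169.254.169.254/2019-10-01/user-data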
Feb 9 09:45:18.395662 systemd[1]: Starting initrd-setup-root.service... Feb 9 09:45:18.407674 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1241) Feb 9 09:45:18.413926 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Feb 9 09:45:18.414000 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 9 09:45:18.416144 kernel: BTRFS info (device nvme0n1p6): has skinny extents Feb 9 09:45:18.422334 initrd-setup-root[1246]: cut: /sysroot/etc/passwd: No such file or directory Feb 9 09:45:18.424931 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 9 09:45:18.431385 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 09:45:18.449187 initrd-setup-root[1272]: cut: /sysroot/etc/group: No such file or directory Feb 9 09:45:18.457222 initrd-setup-root[1280]: cut: /sysroot/etc/shadow: No such file or directory Feb 9 09:45:18.464943 initrd-setup-root[1288]: cut: /sysroot/etc/gshadow: No such file or directory Feb 9 09:45:18.653600 systemd[1]: Finished initrd-setup-root.service. Feb 9 09:45:18.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:18.658941 systemd[1]: Starting ignition-mount.service... Feb 9 09:45:18.670843 kernel: audit: type=1130 audit(1707471918.655:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:18.666906 systemd[1]: Starting sysroot-boot.service... Feb 9 09:45:18.683070 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 9 09:45:18.683241 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Feb 9 09:45:18.716952 ignition[1307]: INFO : Ignition 2.14.0 Feb 9 09:45:18.716952 ignition[1307]: INFO : Stage: mount Feb 9 09:45:18.723338 ignition[1307]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 09:45:18.723338 ignition[1307]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 9 09:45:18.733719 systemd[1]: Finished sysroot-boot.service. Feb 9 09:45:18.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:18.744658 kernel: audit: type=1130 audit(1707471918.735:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:18.750749 ignition[1307]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 9 09:45:18.753424 ignition[1307]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 9 09:45:18.756563 ignition[1307]: INFO : PUT result: OK Feb 9 09:45:18.762177 ignition[1307]: INFO : mount: mount passed Feb 9 09:45:18.763764 ignition[1307]: INFO : Ignition finished successfully Feb 9 09:45:18.772844 systemd[1]: Finished ignition-mount.service. Feb 9 09:45:18.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
Feb 9 09:45:18.777126 systemd[1]: Starting ignition-files.service...
Feb 9 09:45:18.787573 kernel: audit: type=1130 audit(1707471918.774:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:18.793354 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 9 09:45:18.810648 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1316)
Feb 9 09:45:18.816851 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 9 09:45:18.816893 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 9 09:45:18.816918 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Feb 9 09:45:18.825643 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 9 09:45:18.830048 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 09:45:18.848931 ignition[1335]: INFO : Ignition 2.14.0
Feb 9 09:45:18.848931 ignition[1335]: INFO : Stage: files
Feb 9 09:45:18.852211 ignition[1335]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 09:45:18.852211 ignition[1335]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 09:45:18.865099 ignition[1335]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 09:45:18.867564 ignition[1335]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 09:45:18.870636 ignition[1335]: INFO : PUT result: OK
Feb 9 09:45:18.877727 ignition[1335]: DEBUG : files: compiled without relabeling support, skipping
Feb 9 09:45:18.881796 ignition[1335]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 9 09:45:18.881796 ignition[1335]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 9 09:45:18.912636 ignition[1335]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 9 09:45:18.915655 ignition[1335]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 9 09:45:18.919403 unknown[1335]: wrote ssh authorized keys file for user: core
Feb 9 09:45:18.921629 ignition[1335]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 9 09:45:18.925242 ignition[1335]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 9 09:45:18.928812 ignition[1335]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 9 09:45:18.932134 ignition[1335]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 9 09:45:18.935889 ignition[1335]: INFO : GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 9 09:45:19.068451 ignition[1335]: INFO : GET result: OK
Feb 9 09:45:19.074768 systemd-networkd[1180]: eth0: Gained IPv6LL
Feb 9 09:45:19.180582 ignition[1335]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 9 09:45:19.184534 ignition[1335]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz"
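The ensureUsers ops run `usermod --root /sysroot core` and then write the user's SSH keys into the target root ("wrote ssh authorized keys file for user: core"). A hedged sketch of the shape of that key write; the path below is the conventional authorized_keys location, treated here as an assumption (Flatcar's real tooling goes through its update-ssh-keys machinery):

```python
# Illustrative only: append SSH public keys for a user under the target root,
# with the conventional 0700/0600 permissions on ~/.ssh and authorized_keys.
import os

def write_authorized_keys(keys: list[str], root: str = "/sysroot",
                          user: str = "core") -> None:
    ssh_dir = os.path.join(root, "home", user, ".ssh")
    os.makedirs(ssh_dir, mode=0o700, exist_ok=True)
    path = os.path.join(ssh_dir, "authorized_keys")
    with open(path, "a") as f:
        f.write("".join(key.rstrip() + "\n" for key in keys))
    os.chmod(path, 0o600)
```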
Feb 9 09:45:19.188317 ignition[1335]: INFO : GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-arm64.tar.gz: attempt #1
Feb 9 09:45:19.690193 ignition[1335]: INFO : GET result: OK
Feb 9 09:45:19.976010 ignition[1335]: DEBUG : file matches expected sum of: 4c7e4541123cbd6f1d6fec1f827395cd58d65716c0998de790f965485738b6d6257c0dc46fd7f66403166c299f6d5bf9ff30b6e1ff9afbb071f17005e834518c
Feb 9 09:45:19.981157 ignition[1335]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz"
Feb 9 09:45:19.981157 ignition[1335]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz"
Feb 9 09:45:19.981157 ignition[1335]: INFO : GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz: attempt #1
Feb 9 09:45:20.362097 ignition[1335]: INFO : GET result: OK
Feb 9 09:45:20.770338 ignition[1335]: DEBUG : file matches expected sum of: 6b5df61a53601926e4b5a9174828123d555f592165439f541bc117c68781f41c8bd30dccd52367e406d104df849bcbcfb72d9c4bafda4b045c59ce95d0ca0742
Feb 9 09:45:20.774950 ignition[1335]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz"
Feb 9 09:45:20.774950 ignition[1335]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 9 09:45:20.774950 ignition[1335]: INFO : GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubelet: attempt #1
Feb 9 09:45:21.065424 ignition[1335]: INFO : GET result: OK
Feb 9 09:45:35.248565 ignition[1335]: DEBUG : file matches expected sum of: 0e4ee1f23bf768c49d09beb13a6b5fad6efc8e3e685e7c5610188763e3af55923fb46158b5e76973a0f9a055f9b30d525b467c53415f965536adc2f04d9cf18d
Feb 9 09:45:35.253709 ignition[1335]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 9 09:45:35.253709 ignition[1335]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 9 09:45:35.253709 ignition[1335]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 9 09:45:35.253709 ignition[1335]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/eks/bootstrap.sh"
Feb 9 09:45:35.253709 ignition[1335]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Feb 9 09:45:35.278829 ignition[1335]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4180278042"
Feb 9 09:45:35.285485 ignition[1335]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4180278042": device or resource busy
Feb 9 09:45:35.285485 ignition[1335]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem4180278042", trying btrfs: device or resource busy
Feb 9 09:45:35.285485 ignition[1335]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4180278042"
Feb 9 09:45:35.294869 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1338)
Feb 9 09:45:35.295023 ignition[1335]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4180278042"
Feb 9 09:45:35.299213 ignition[1335]: INFO : op(3): [started] unmounting "/mnt/oem4180278042"
Feb 9 09:45:35.301581 ignition[1335]: INFO : op(3): [finished] unmounting "/mnt/oem4180278042"
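Each "file matches expected sum of" line is Ignition hashing the downloaded artifact and comparing it against the SHA512 digest carried in the config, so a corrupted or tampered download fails the stage instead of landing on disk. A minimal sketch of the same check:

```python
# Verify a downloaded artifact against an expected SHA512 digest, streaming
# in chunks so large binaries (e.g. kubelet) don't need to fit in memory.
import hashlib

def sha512_matches(path: str, expected_hex: str, chunk_size: int = 1 << 20) -> bool:
    h = hashlib.sha512()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest() == expected_hex.lower()

# e.g. for the crictl tarball fetched above:
# sha512_matches("/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz",
#                "4c7e4541123cbd6f1d6fec1f827395cd58d65716c0998de790f965485738b6d6"
#                "257c0dc46fd7f66403166c299f6d5bf9ff30b6e1ff9afbb071f17005e834518c")
```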
"/mnt/oem4180278042" Feb 9 09:45:35.304166 ignition[1335]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/eks/bootstrap.sh" Feb 9 09:45:35.304166 ignition[1335]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 9 09:45:35.304166 ignition[1335]: INFO : GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubeadm: attempt #1 Feb 9 09:45:35.319379 systemd[1]: mnt-oem4180278042.mount: Deactivated successfully. Feb 9 09:45:35.371577 ignition[1335]: INFO : GET result: OK Feb 9 09:45:35.892432 ignition[1335]: DEBUG : file matches expected sum of: 46c9f489062bdb84574703f7339d140d7e42c9c71b367cd860071108a3c1d38fabda2ef69f9c0ff88f7c80e88d38f96ab2248d4c9a6c9c60b0a4c20fd640d0db Feb 9 09:45:35.897032 ignition[1335]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 9 09:45:35.897032 ignition[1335]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 09:45:35.897032 ignition[1335]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 09:45:35.897032 ignition[1335]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/bin/kubectl" Feb 9 09:45:35.897032 ignition[1335]: INFO : GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubectl: attempt #1 Feb 9 09:45:35.964158 ignition[1335]: INFO : GET result: OK Feb 9 09:45:36.556262 ignition[1335]: DEBUG : file matches expected sum of: 3672fda0beebbbd636a2088f427463cbad32683ea4fbb1df61650552e63846b6a47db803ccb70c3db0a8f24746a23a5632bdc15a3fb78f4f7d833e7f86763c2a Feb 9 09:45:36.561772 ignition[1335]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/bin/kubectl" Feb 9 09:45:36.561772 ignition[1335]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 09:45:36.561772 ignition[1335]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 09:45:36.561772 ignition[1335]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 9 09:45:36.561772 ignition[1335]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 9 09:45:36.561772 ignition[1335]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 9 09:45:36.561772 ignition[1335]: INFO : GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Feb 9 09:45:36.975106 ignition[1335]: INFO : GET result: OK Feb 9 09:45:37.098672 ignition[1335]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 9 09:45:37.104190 ignition[1335]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/home/core/install.sh" Feb 9 09:45:37.104190 ignition[1335]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/home/core/install.sh" Feb 9 09:45:37.104190 ignition[1335]: INFO : files: createFilesystemsFiles: createFiles: op(11): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 09:45:37.104190 ignition[1335]: INFO : 
Feb 9 09:45:37.104190 ignition[1335]: INFO : files: createFilesystemsFiles: createFiles: op(12): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb 9 09:45:37.104190 ignition[1335]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Feb 9 09:45:37.132861 ignition[1335]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1046472004"
Feb 9 09:45:37.135630 ignition[1335]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1046472004": device or resource busy
Feb 9 09:45:37.135630 ignition[1335]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1046472004", trying btrfs: device or resource busy
Feb 9 09:45:37.135630 ignition[1335]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1046472004"
Feb 9 09:45:37.147381 ignition[1335]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1046472004"
Feb 9 09:45:37.150272 ignition[1335]: INFO : op(6): [started] unmounting "/mnt/oem1046472004"
Feb 9 09:45:37.152552 ignition[1335]: INFO : op(6): [finished] unmounting "/mnt/oem1046472004"
Feb 9 09:45:37.155487 ignition[1335]: INFO : files: createFilesystemsFiles: createFiles: op(12): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb 9 09:45:37.155487 ignition[1335]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Feb 9 09:45:37.155487 ignition[1335]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Feb 9 09:45:37.172044 systemd[1]: mnt-oem1046472004.mount: Deactivated successfully.
Feb 9 09:45:37.188666 ignition[1335]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4006978556"
Feb 9 09:45:37.192043 ignition[1335]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4006978556": device or resource busy
Feb 9 09:45:37.192043 ignition[1335]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem4006978556", trying btrfs: device or resource busy
Feb 9 09:45:37.192043 ignition[1335]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4006978556"
Feb 9 09:45:37.206062 ignition[1335]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4006978556"
Feb 9 09:45:37.206062 ignition[1335]: INFO : op(9): [started] unmounting "/mnt/oem4006978556"
Feb 9 09:45:37.206062 ignition[1335]: INFO : op(9): [finished] unmounting "/mnt/oem4006978556"
Feb 9 09:45:37.206062 ignition[1335]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Feb 9 09:45:37.206062 ignition[1335]: INFO : files: createFilesystemsFiles: createFiles: op(14): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Feb 9 09:45:37.206062 ignition[1335]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Feb 9 09:45:37.237829 ignition[1335]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2955597958"
Feb 9 09:45:37.242476 ignition[1335]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2955597958": device or resource busy
Feb 9 09:45:37.242476 ignition[1335]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2955597958", trying btrfs: device or resource busy
Feb 9 09:45:37.242476 ignition[1335]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2955597958"
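The recurring failed-then-retried OEM mounts follow a fixed pattern: the first attempt mounts the partition as ext4, fails ("device or resource busy"), and a second attempt as btrfs succeeds, since the OEM partition on this image is btrfs. A rough Python rendering of that try-each-filesystem loop (illustrative; not Ignition's actual Go code):

```python
# Try a list of filesystem types in order until one mounts, mirroring the
# ext4-then-btrfs fallback visible in the OEM mount log entries above.
# tempfile.mkdtemp gives throwaway mountpoints in the spirit of /mnt/oemNNNN.
import subprocess
import tempfile

def mount_with_fallback(device: str, fstypes=("ext4", "btrfs")) -> str:
    mountpoint = tempfile.mkdtemp(prefix="oem")
    last_err = None
    for fstype in fstypes:
        try:
            subprocess.run(
                ["mount", "-t", fstype, device, mountpoint],
                check=True, capture_output=True,
            )
            return mountpoint  # mounted; caller unmounts when done
        except subprocess.CalledProcessError as err:
            last_err = err
    raise RuntimeError(f"could not mount {device}: {last_err}")

# e.g. mp = mount_with_fallback("/dev/disk/by-label/OEM")  # requires root
```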
Feb 9 09:45:37.264694 ignition[1335]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2955597958"
Feb 9 09:45:37.264694 ignition[1335]: INFO : op(c): [started] unmounting "/mnt/oem2955597958"
Feb 9 09:45:37.263285 systemd[1]: mnt-oem2955597958.mount: Deactivated successfully.
Feb 9 09:45:37.271799 ignition[1335]: INFO : op(c): [finished] unmounting "/mnt/oem2955597958"
Feb 9 09:45:37.281767 ignition[1335]: INFO : files: createFilesystemsFiles: createFiles: op(14): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Feb 9 09:45:37.285569 ignition[1335]: INFO : files: op(15): [started] processing unit "coreos-metadata-sshkeys@.service"
Feb 9 09:45:37.285569 ignition[1335]: INFO : files: op(15): [finished] processing unit "coreos-metadata-sshkeys@.service"
Feb 9 09:45:37.285569 ignition[1335]: INFO : files: op(16): [started] processing unit "amazon-ssm-agent.service"
Feb 9 09:45:37.285569 ignition[1335]: INFO : files: op(16): op(17): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Feb 9 09:45:37.297715 ignition[1335]: INFO : files: op(16): op(17): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Feb 9 09:45:37.297715 ignition[1335]: INFO : files: op(16): [finished] processing unit "amazon-ssm-agent.service"
Feb 9 09:45:37.297715 ignition[1335]: INFO : files: op(18): [started] processing unit "nvidia.service"
Feb 9 09:45:37.306524 ignition[1335]: INFO : files: op(18): [finished] processing unit "nvidia.service"
Feb 9 09:45:37.306524 ignition[1335]: INFO : files: op(19): [started] processing unit "prepare-critools.service"
Feb 9 09:45:37.306524 ignition[1335]: INFO : files: op(19): op(1a): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 09:45:37.315497 ignition[1335]: INFO : files: op(19): op(1a): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 09:45:37.315497 ignition[1335]: INFO : files: op(19): [finished] processing unit "prepare-critools.service"
Feb 9 09:45:37.321985 ignition[1335]: INFO : files: op(1b): [started] processing unit "prepare-helm.service"
Feb 9 09:45:37.321985 ignition[1335]: INFO : files: op(1b): op(1c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 9 09:45:37.328276 ignition[1335]: INFO : files: op(1b): op(1c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 9 09:45:37.328276 ignition[1335]: INFO : files: op(1b): [finished] processing unit "prepare-helm.service"
Feb 9 09:45:37.328276 ignition[1335]: INFO : files: op(1d): [started] processing unit "containerd.service"
Feb 9 09:45:37.337089 ignition[1335]: INFO : files: op(1d): op(1e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 9 09:45:37.341746 ignition[1335]: INFO : files: op(1d): op(1e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 9 09:45:37.341746 ignition[1335]: INFO : files: op(1d): [finished] processing unit "containerd.service"
Feb 9 09:45:37.348637 ignition[1335]: INFO : files: op(1f): [started] processing unit "prepare-cni-plugins.service"
Feb 9 09:45:37.348637 ignition[1335]: INFO : files: op(1f): op(20): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 09:45:37.355371 ignition[1335]: INFO : files: op(1f): op(20): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 09:45:37.355371 ignition[1335]: INFO : files: op(1f): [finished] processing unit "prepare-cni-plugins.service"
Feb 9 09:45:37.355371 ignition[1335]: INFO : files: op(21): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Feb 9 09:45:37.364969 ignition[1335]: INFO : files: op(21): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Feb 9 09:45:37.364969 ignition[1335]: INFO : files: op(22): [started] setting preset to enabled for "amazon-ssm-agent.service"
Feb 9 09:45:37.364969 ignition[1335]: INFO : files: op(22): [finished] setting preset to enabled for "amazon-ssm-agent.service"
Feb 9 09:45:37.364969 ignition[1335]: INFO : files: op(23): [started] setting preset to enabled for "nvidia.service"
Feb 9 09:45:37.382787 ignition[1335]: INFO : files: op(23): [finished] setting preset to enabled for "nvidia.service"
Feb 9 09:45:37.382787 ignition[1335]: INFO : files: op(24): [started] setting preset to enabled for "prepare-critools.service"
Feb 9 09:45:37.382787 ignition[1335]: INFO : files: op(24): [finished] setting preset to enabled for "prepare-critools.service"
Feb 9 09:45:37.382787 ignition[1335]: INFO : files: op(25): [started] setting preset to enabled for "prepare-helm.service"
Feb 9 09:45:37.382787 ignition[1335]: INFO : files: op(25): [finished] setting preset to enabled for "prepare-helm.service"
Feb 9 09:45:37.382787 ignition[1335]: INFO : files: op(26): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 09:45:37.382787 ignition[1335]: INFO : files: op(26): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 09:45:37.435938 kernel: audit: type=1130 audit(1707471937.405:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:37.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:37.404843 systemd[1]: Finished ignition-files.service.
Feb 9 09:45:37.441640 ignition[1335]: INFO : files: createResultFile: createFiles: op(27): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 09:45:37.441640 ignition[1335]: INFO : files: createResultFile: createFiles: op(27): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 09:45:37.441640 ignition[1335]: INFO : files: files passed
Feb 9 09:45:37.441640 ignition[1335]: INFO : Ignition finished successfully
Feb 9 09:45:37.482559 kernel: audit: type=1130 audit(1707471937.447:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:37.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:37.421022 systemd[1]: Starting initrd-setup-root-after-ignition.service...
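The "setting preset to enabled" ops record an enable preset for each listed unit; when systemd later applies presets in the real root, that typically materializes as a wants-symlink under /etc/systemd/system. A simplified sketch of that final enablement step (the symlink layout assumes a unit installed via WantedBy=multi-user.target; Ignition itself only records the preset):

```python
# Illustrative enablement: create the conventional wants-symlink that
# "systemctl enable" would produce for a unit with WantedBy=multi-user.target.
import os

def enable_unit(unit: str, root: str = "/sysroot",
                target: str = "multi-user.target") -> None:
    wants_dir = os.path.join(root, "etc/systemd/system", f"{target}.wants")
    os.makedirs(wants_dir, exist_ok=True)
    link = os.path.join(wants_dir, unit)
    if not os.path.islink(link):
        # Units written by Ignition live under /etc/systemd/system.
        os.symlink(f"/etc/systemd/system/{unit}", link)

# e.g. enable_unit("prepare-helm.service")
```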
Feb 9 09:45:37.435984 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 9 09:45:37.488589 initrd-setup-root-after-ignition[1358]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 9 09:45:37.437413 systemd[1]: Starting ignition-quench.service...
Feb 9 09:45:37.446841 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 9 09:45:37.503995 kernel: audit: type=1130 audit(1707471937.493:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:37.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:37.456787 systemd[1]: Reached target ignition-complete.target.
Feb 9 09:45:37.512982 kernel: audit: type=1131 audit(1707471937.502:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:37.502000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:37.475580 systemd[1]: Starting initrd-parse-etc.service...
Feb 9 09:45:37.491903 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 9 09:45:37.493521 systemd[1]: Finished ignition-quench.service.
Feb 9 09:45:37.535252 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 9 09:45:37.537492 systemd[1]: Finished initrd-parse-etc.service.
Feb 9 09:45:37.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:37.541159 systemd[1]: Reached target initrd-fs.target.
Feb 9 09:45:37.556495 kernel: audit: type=1130 audit(1707471937.539:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:37.556530 kernel: audit: type=1131 audit(1707471937.539:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:37.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:37.558219 systemd[1]: Reached target initrd.target.
Feb 9 09:45:37.559806 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 9 09:45:37.565154 systemd[1]: Starting dracut-pre-pivot.service...
Feb 9 09:45:37.590512 systemd[1]: Finished dracut-pre-pivot.service.
Feb 9 09:45:37.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:37.601664 kernel: audit: type=1130 audit(1707471937.591:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:37.602169 systemd[1]: Starting initrd-cleanup.service...
Feb 9 09:45:37.621478 systemd[1]: Stopped target nss-lookup.target.
Feb 9 09:45:37.624404 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 9 09:45:37.630183 systemd[1]: Stopped target timers.target.
Feb 9 09:45:37.632940 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 9 09:45:37.633231 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 9 09:45:37.697978 kernel: audit: type=1131 audit(1707471937.633:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:37.698026 kernel: audit: type=1131 audit(1707471937.657:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:37.698054 kernel: audit: type=1131 audit(1707471937.663:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:37.633000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:37.657000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:37.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:37.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:37.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:37.635329 systemd[1]: Stopped target initrd.target.
Feb 9 09:45:37.700071 ignition[1374]: INFO : Ignition 2.14.0
Feb 9 09:45:37.700071 ignition[1374]: INFO : Stage: umount
Feb 9 09:45:37.700071 ignition[1374]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 09:45:37.700071 ignition[1374]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 09:45:37.637127 systemd[1]: Stopped target basic.target.
Feb 9 09:45:37.638880 systemd[1]: Stopped target ignition-complete.target.
Feb 9 09:45:37.640894 systemd[1]: Stopped target ignition-diskful.target.
Feb 9 09:45:37.642908 systemd[1]: Stopped target initrd-root-device.target.
Feb 9 09:45:37.644921 systemd[1]: Stopped target remote-fs.target.
Feb 9 09:45:37.646802 systemd[1]: Stopped target remote-fs-pre.target.
Feb 9 09:45:37.648759 systemd[1]: Stopped target sysinit.target.
Feb 9 09:45:37.650770 systemd[1]: Stopped target local-fs.target.
Feb 9 09:45:37.652804 systemd[1]: Stopped target local-fs-pre.target.
Feb 9 09:45:37.654911 systemd[1]: Stopped target swap.target.
Feb 9 09:45:37.656674 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 9 09:45:37.657004 systemd[1]: Stopped dracut-pre-mount.service.
Feb 9 09:45:37.659821 systemd[1]: Stopped target cryptsetup.target.
Feb 9 09:45:37.662113 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 9 09:45:37.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:37.662446 systemd[1]: Stopped dracut-initqueue.service.
Feb 9 09:45:37.665724 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 9 09:45:37.666062 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 9 09:45:37.668733 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 9 09:45:37.669052 systemd[1]: Stopped ignition-files.service.
Feb 9 09:45:37.673282 systemd[1]: Stopping ignition-mount.service...
Feb 9 09:45:37.712334 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 9 09:45:37.715880 systemd[1]: Stopped kmod-static-nodes.service.
Feb 9 09:45:37.757000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:37.760000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:37.754936 systemd[1]: Stopping sysroot-boot.service...
Feb 9 09:45:37.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:37.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:37.756601 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 9 09:45:37.756981 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 9 09:45:37.759503 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 9 09:45:37.759848 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 9 09:45:37.767634 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 9 09:45:37.767890 systemd[1]: Finished initrd-cleanup.service.
Feb 9 09:45:37.789851 ignition[1374]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 09:45:37.793318 ignition[1374]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 09:45:37.796790 ignition[1374]: INFO : PUT result: OK
Feb 9 09:45:37.803481 ignition[1374]: INFO : umount: umount passed
Feb 9 09:45:37.805346 ignition[1374]: INFO : Ignition finished successfully
Feb 9 09:45:37.811469 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 9 09:45:37.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:37.814000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:37.816000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:37.829000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:37.833000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:37.811744 systemd[1]: Stopped ignition-mount.service.
Feb 9 09:45:37.813969 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 9 09:45:37.814090 systemd[1]: Stopped ignition-disks.service.
Feb 9 09:45:37.816055 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 9 09:45:37.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:37.816164 systemd[1]: Stopped ignition-kargs.service.
Feb 9 09:45:37.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:37.818118 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 9 09:45:37.818223 systemd[1]: Stopped ignition-fetch.service.
Feb 9 09:45:37.831591 systemd[1]: Stopped target network.target.
Feb 9 09:45:37.833438 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 9 09:45:37.833565 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 9 09:45:37.835688 systemd[1]: Stopped target paths.target.
Feb 9 09:45:37.837331 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 9 09:45:37.844331 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 9 09:45:37.850104 systemd[1]: Stopped target slices.target.
Feb 9 09:45:37.851725 systemd[1]: Stopped target sockets.target.
Feb 9 09:45:37.853519 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 9 09:45:37.853588 systemd[1]: Closed iscsid.socket.
Feb 9 09:45:37.855178 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 9 09:45:37.855270 systemd[1]: Closed iscsiuio.socket.
Feb 9 09:45:37.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:37.856855 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 9 09:45:37.856974 systemd[1]: Stopped ignition-setup.service.
Feb 9 09:45:37.859855 systemd[1]: Stopping systemd-networkd.service...
Feb 9 09:45:37.862483 systemd[1]: Stopping systemd-resolved.service...
Feb 9 09:45:37.864716 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 9 09:45:37.909000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:37.864921 systemd[1]: Stopped sysroot-boot.service.
Feb 9 09:45:37.868132 systemd-networkd[1180]: eth0: DHCPv6 lease lost
Feb 9 09:45:37.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:37.870491 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 9 09:45:37.918000 audit: BPF prog-id=9 op=UNLOAD
Feb 9 09:45:37.870642 systemd[1]: Stopped initrd-setup-root.service.
Feb 9 09:45:37.898727 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 9 09:45:37.899003 systemd[1]: Stopped systemd-resolved.service.
Feb 9 09:45:37.913336 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 9 09:45:37.913556 systemd[1]: Stopped systemd-networkd.service.
Feb 9 09:45:37.927000 audit: BPF prog-id=6 op=UNLOAD
Feb 9 09:45:37.928757 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 9 09:45:37.932459 systemd[1]: Closed systemd-networkd.socket.
Feb 9 09:45:37.936926 systemd[1]: Stopping network-cleanup.service...
Feb 9 09:45:37.940753 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 9 09:45:37.942000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:37.944000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:37.941714 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb 9 09:45:37.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:37.944250 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 9 09:45:37.944367 systemd[1]: Stopped systemd-sysctl.service.
Feb 9 09:45:37.948289 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 9 09:45:37.948397 systemd[1]: Stopped systemd-modules-load.service.
Feb 9 09:45:37.964000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:37.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:37.949045 systemd[1]: Stopping systemd-udevd.service...
Feb 9 09:45:37.958947 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 9 09:45:37.962022 systemd[1]: Stopped systemd-udevd.service.
Feb 9 09:45:37.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:37.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:37.981000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:37.966866 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 9 09:45:37.999000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:38.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:38.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:37.967108 systemd[1]: Stopped network-cleanup.service.
Feb 9 09:45:37.969693 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 9 09:45:37.969797 systemd[1]: Closed systemd-udevd-control.socket.
Feb 9 09:45:37.971796 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 9 09:45:37.972302 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb 9 09:45:37.975597 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 9 09:45:37.975915 systemd[1]: Stopped dracut-pre-udev.service.
Feb 9 09:45:37.978955 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 9 09:45:37.979068 systemd[1]: Stopped dracut-cmdline.service.
Feb 9 09:45:37.980943 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 9 09:45:38.045000 audit: BPF prog-id=8 op=UNLOAD
Feb 9 09:45:38.045000 audit: BPF prog-id=7 op=UNLOAD
Feb 9 09:45:38.050000 audit: BPF prog-id=5 op=UNLOAD
Feb 9 09:45:37.981050 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb 9 09:45:38.051000 audit: BPF prog-id=4 op=UNLOAD
Feb 9 09:45:38.051000 audit: BPF prog-id=3 op=UNLOAD
Feb 9 09:45:37.985031 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb 9 09:45:37.997230 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 9 09:45:37.997367 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb 9 09:45:38.004815 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 9 09:45:38.005034 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb 9 09:45:38.081079 systemd-journald[308]: Received SIGTERM from PID 1 (systemd).
Feb 9 09:45:38.081148 iscsid[1185]: iscsid shutting down.
Feb 9 09:45:38.007354 systemd[1]: Reached target initrd-switch-root.target.
Feb 9 09:45:38.012298 systemd[1]: Starting initrd-switch-root.service...
Feb 9 09:45:38.042688 systemd[1]: Switching root.
Feb 9 09:45:38.094117 systemd-journald[308]: Journal stopped
Feb 9 09:45:42.447530 kernel: SELinux: Class mctp_socket not defined in policy.
Feb 9 09:45:42.447658 kernel: SELinux: Class anon_inode not defined in policy.
Feb 9 09:45:42.447693 kernel: SELinux: the above unknown classes and permissions will be allowed
Feb 9 09:45:42.447725 kernel: SELinux: policy capability network_peer_controls=1
Feb 9 09:45:42.447756 kernel: SELinux: policy capability open_perms=1
Feb 9 09:45:42.447787 kernel: SELinux: policy capability extended_socket_class=1
Feb 9 09:45:42.447818 kernel: SELinux: policy capability always_check_network=0
Feb 9 09:45:42.447852 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 9 09:45:42.447882 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 9 09:45:42.447912 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 9 09:45:42.447943 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 9 09:45:42.447974 systemd[1]: Successfully loaded SELinux policy in 66.482ms.
Feb 9 09:45:42.448032 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 19.503ms.
Feb 9 09:45:42.448067 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 09:45:42.448099 systemd[1]: Detected virtualization amazon.
Feb 9 09:45:42.448130 systemd[1]: Detected architecture arm64.
Feb 9 09:45:42.448168 systemd[1]: Detected first boot.
Feb 9 09:45:42.448200 systemd[1]: Initializing machine ID from VM UUID.
Feb 9 09:45:42.448232 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Feb 9 09:45:42.448263 systemd[1]: Populated /etc with preset unit settings.
Feb 9 09:45:42.448296 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 09:45:42.448330 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 09:45:42.448363 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 09:45:42.448405 systemd[1]: Queued start job for default target multi-user.target.
Feb 9 09:45:42.448437 systemd[1]: Created slice system-addon\x2dconfig.slice.
Feb 9 09:45:42.448468 systemd[1]: Created slice system-addon\x2drun.slice.
Feb 9 09:45:42.448500 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Feb 9 09:45:42.448531 systemd[1]: Created slice system-getty.slice.
Feb 9 09:45:42.448562 systemd[1]: Created slice system-modprobe.slice.
Feb 9 09:45:42.448593 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb 9 09:45:42.448641 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Feb 9 09:45:42.448676 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb 9 09:45:42.448709 systemd[1]: Created slice user.slice.
Feb 9 09:45:42.448741 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 09:45:42.448770 systemd[1]: Started systemd-ask-password-wall.path.
Feb 9 09:45:42.448799 systemd[1]: Set up automount boot.automount.
Feb 9 09:45:42.448828 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Feb 9 09:45:42.448862 systemd[1]: Reached target integritysetup.target.
Feb 9 09:45:42.448896 systemd[1]: Reached target remote-cryptsetup.target.
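After switch-root, PID 1 loads the SELinux policy (66.482ms here) and relabels /dev, /dev/shm, /run, and /sys/fs/cgroup before continuing. A quick way to inspect the resulting SELinux mode from userspace, assuming selinuxfs is mounted at its usual location:

```python
# Read the current SELinux enforcing state from selinuxfs:
# the "enforce" file holds "1" (enforcing) or "0" (permissive).
from pathlib import Path

def selinux_mode(selinuxfs: str = "/sys/fs/selinux") -> str:
    enforce = Path(selinuxfs, "enforce")
    if not enforce.exists():
        return "unavailable (selinuxfs not mounted)"
    return "enforcing" if enforce.read_text().strip() == "1" else "permissive"

print(selinux_mode())
```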
Feb 9 09:45:42.448930 systemd[1]: Reached target remote-fs.target.
Feb 9 09:45:42.448962 systemd[1]: Reached target slices.target.
Feb 9 09:45:42.448995 systemd[1]: Reached target swap.target.
Feb 9 09:45:42.449027 systemd[1]: Reached target torcx.target.
Feb 9 09:45:42.449059 systemd[1]: Reached target veritysetup.target.
Feb 9 09:45:42.449089 systemd[1]: Listening on systemd-coredump.socket.
Feb 9 09:45:42.449128 systemd[1]: Listening on systemd-initctl.socket.
Feb 9 09:45:42.449158 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 09:45:42.449188 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 09:45:42.449217 systemd[1]: Listening on systemd-journald.socket.
Feb 9 09:45:42.449251 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 09:45:42.449282 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 09:45:42.449310 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 09:45:42.449339 systemd[1]: Listening on systemd-userdbd.socket.
Feb 9 09:45:42.449370 systemd[1]: Mounting dev-hugepages.mount...
Feb 9 09:45:42.449399 systemd[1]: Mounting dev-mqueue.mount...
Feb 9 09:45:42.449428 systemd[1]: Mounting media.mount...
Feb 9 09:45:42.449459 systemd[1]: Mounting sys-kernel-debug.mount...
Feb 9 09:45:42.449491 systemd[1]: Mounting sys-kernel-tracing.mount...
Feb 9 09:45:42.449525 systemd[1]: Mounting tmp.mount...
Feb 9 09:45:42.449559 systemd[1]: Starting flatcar-tmpfiles.service...
Feb 9 09:45:42.449591 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Feb 9 09:45:42.449638 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 09:45:42.449673 systemd[1]: Starting modprobe@configfs.service...
Feb 9 09:45:42.449704 systemd[1]: Starting modprobe@dm_mod.service...
Feb 9 09:45:42.449735 systemd[1]: Starting modprobe@drm.service...
Feb 9 09:45:42.449764 systemd[1]: Starting modprobe@efi_pstore.service...
Feb 9 09:45:42.449796 systemd[1]: Starting modprobe@fuse.service...
Feb 9 09:45:42.449825 systemd[1]: Starting modprobe@loop.service...
Feb 9 09:45:42.449859 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 9 09:45:42.449892 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Feb 9 09:45:42.449923 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Feb 9 09:45:42.449952 systemd[1]: Starting systemd-journald.service...
Feb 9 09:45:42.449981 systemd[1]: Starting systemd-modules-load.service...
Feb 9 09:45:42.450012 systemd[1]: Starting systemd-network-generator.service...
Feb 9 09:45:42.450041 systemd[1]: Starting systemd-remount-fs.service...
Feb 9 09:45:42.450070 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 09:45:42.450102 kernel: fuse: init (API version 7.34)
Feb 9 09:45:42.450141 systemd[1]: Mounted dev-hugepages.mount.
Feb 9 09:45:42.450170 systemd[1]: Mounted dev-mqueue.mount.
Feb 9 09:45:42.450207 systemd[1]: Mounted media.mount.
Feb 9 09:45:42.450239 systemd[1]: Mounted sys-kernel-debug.mount.
Feb 9 09:45:42.450268 kernel: loop: module loaded
Feb 9 09:45:42.450297 systemd[1]: Mounted sys-kernel-tracing.mount.
Feb 9 09:45:42.450326 systemd[1]: Mounted tmp.mount.
Feb 9 09:45:42.450357 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 09:45:42.450388 kernel: kauditd_printk_skb: 48 callbacks suppressed
Feb 9 09:45:42.450419 kernel: audit: type=1130 audit(1707471942.414:88): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:42.450448 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 9 09:45:42.450479 systemd[1]: Finished modprobe@configfs.service.
Feb 9 09:45:42.450508 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 9 09:45:42.450537 kernel: audit: type=1130 audit(1707471942.439:89): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:42.450566 systemd-journald[1524]: Journal started
Feb 9 09:45:42.454938 systemd-journald[1524]: Runtime Journal (/run/log/journal/ec27af2446770d899455b0743b52e912) is 8.0M, max 75.4M, 67.4M free.
Feb 9 09:45:42.414000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:42.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:42.439000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:42.468494 kernel: audit: type=1131 audit(1707471942.439:90): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:42.468571 systemd[1]: Finished modprobe@dm_mod.service.
Feb 9 09:45:42.470202 systemd[1]: Started systemd-journald.service.
Feb 9 09:45:42.474502 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 9 09:45:42.474928 systemd[1]: Finished modprobe@drm.service.
Feb 9 09:45:42.477366 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 9 09:45:42.443000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb 9 09:45:42.482761 kernel: audit: type=1305 audit(1707471942.443:91): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb 9 09:45:42.482807 kernel: audit: type=1300 audit(1707471942.443:91): arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffdd9b8620 a2=4000 a3=1 items=0 ppid=1 pid=1524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:45:42.443000 audit[1524]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffdd9b8620 a2=4000 a3=1 items=0 ppid=1 pid=1524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:45:42.443000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Feb 9 09:45:42.494691 kernel: audit: type=1327 audit(1707471942.443:91): proctitle="/usr/lib/systemd/systemd-journald"
Feb 9 09:45:42.499380 systemd[1]: Finished modprobe@efi_pstore.service.
Feb 9 09:45:42.525350 kernel: audit: type=1130 audit(1707471942.466:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:42.525411 kernel: audit: type=1131 audit(1707471942.466:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:42.525448 kernel: audit: type=1130 audit(1707471942.470:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:42.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:42.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:42.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:42.525523 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 9 09:45:42.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:42.533705 kernel: audit: type=1130 audit(1707471942.475:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:42.533885 systemd[1]: Finished modprobe@fuse.service.
Feb 9 09:45:42.536254 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 9 09:45:42.536715 systemd[1]: Finished modprobe@loop.service.
Feb 9 09:45:42.539118 systemd[1]: Finished systemd-modules-load.service.
Feb 9 09:45:42.549202 systemd[1]: Finished systemd-network-generator.service.
Feb 9 09:45:42.552005 systemd[1]: Finished systemd-remount-fs.service.
Feb 9 09:45:42.475000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:42.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:42.523000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:42.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:42.534000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:42.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:42.536000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:42.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:42.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:42.554487 systemd[1]: Reached target network-pre.target.
Feb 9 09:45:42.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:42.565305 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Feb 9 09:45:42.569686 systemd[1]: Mounting sys-kernel-config.mount...
Feb 9 09:45:42.576202 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 9 09:45:42.579595 systemd[1]: Starting systemd-hwdb-update.service...
Feb 9 09:45:42.588902 systemd[1]: Starting systemd-journal-flush.service...
Feb 9 09:45:42.590702 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 9 09:45:42.593383 systemd[1]: Starting systemd-random-seed.service...
Feb 9 09:45:42.597123 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 09:45:42.601350 systemd[1]: Starting systemd-sysctl.service... Feb 9 09:45:42.608437 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 09:45:42.611181 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 09:45:42.639794 systemd-journald[1524]: Time spent on flushing to /var/log/journal/ec27af2446770d899455b0743b52e912 is 90.093ms for 1105 entries. Feb 9 09:45:42.639794 systemd-journald[1524]: System Journal (/var/log/journal/ec27af2446770d899455b0743b52e912) is 8.0M, max 195.6M, 187.6M free. Feb 9 09:45:42.751118 systemd-journald[1524]: Received client request to flush runtime journal. Feb 9 09:45:42.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:42.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:42.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:42.640746 systemd[1]: Finished systemd-sysctl.service. Feb 9 09:45:42.654364 systemd[1]: Finished systemd-random-seed.service. Feb 9 09:45:42.656354 systemd[1]: Reached target first-boot-complete.target. Feb 9 09:45:42.697934 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 09:45:42.702093 systemd[1]: Starting systemd-sysusers.service... Feb 9 09:45:42.755939 systemd[1]: Finished systemd-journal-flush.service. Feb 9 09:45:42.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:42.779991 systemd[1]: Finished systemd-sysusers.service. Feb 9 09:45:42.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:42.784244 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 09:45:42.812379 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 09:45:42.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:42.816886 systemd[1]: Starting systemd-udev-settle.service... Feb 9 09:45:42.837342 udevadm[1585]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 9 09:45:42.852811 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 09:45:42.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:43.504391 systemd[1]: Finished systemd-hwdb-update.service. 
Feb 9 09:45:43.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:43.508512 systemd[1]: Starting systemd-udevd.service... Feb 9 09:45:43.549770 systemd-udevd[1588]: Using default interface naming scheme 'v252'. Feb 9 09:45:43.580639 systemd[1]: Started systemd-udevd.service. Feb 9 09:45:43.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:43.588500 systemd[1]: Starting systemd-networkd.service... Feb 9 09:45:43.597251 systemd[1]: Starting systemd-userdbd.service... Feb 9 09:45:43.675816 systemd[1]: Found device dev-ttyS0.device. Feb 9 09:45:43.689099 systemd[1]: Started systemd-userdbd.service. Feb 9 09:45:43.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:43.734753 (udev-worker)[1595]: Network interface NamePolicy= disabled on kernel command line. Feb 9 09:45:43.785657 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1599) Feb 9 09:45:43.841597 systemd-networkd[1594]: lo: Link UP Feb 9 09:45:43.841628 systemd-networkd[1594]: lo: Gained carrier Feb 9 09:45:43.844270 systemd-networkd[1594]: Enumeration completed Feb 9 09:45:43.844489 systemd[1]: Started systemd-networkd.service. Feb 9 09:45:43.846939 systemd-networkd[1594]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 09:45:43.851654 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 09:45:43.852557 systemd-networkd[1594]: eth0: Link UP Feb 9 09:45:43.852881 systemd-networkd[1594]: eth0: Gained carrier Feb 9 09:45:43.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:43.862128 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 09:45:43.871858 systemd-networkd[1594]: eth0: DHCPv4 address 172.31.16.76/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 9 09:45:44.064097 systemd[1]: dev-disk-by\x2dlabel-OEM.device was skipped because of an unmet condition check (ConditionPathExists=!/usr/.noupdate). Feb 9 09:45:44.082513 systemd[1]: Finished systemd-udev-settle.service. Feb 9 09:45:44.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.087441 systemd[1]: Starting lvm2-activation-early.service... Feb 9 09:45:44.105997 lvm[1709]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 09:45:44.143366 systemd[1]: Finished lvm2-activation-early.service. Feb 9 09:45:44.146005 systemd[1]: Reached target cryptsetup.target. Feb 9 09:45:44.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:45:44.153102 systemd[1]: Starting lvm2-activation.service... Feb 9 09:45:44.161522 lvm[1711]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 09:45:44.200404 systemd[1]: Finished lvm2-activation.service. Feb 9 09:45:44.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.202372 systemd[1]: Reached target local-fs-pre.target. Feb 9 09:45:44.204776 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 09:45:44.204831 systemd[1]: Reached target local-fs.target. Feb 9 09:45:44.211090 systemd[1]: Reached target machines.target. Feb 9 09:45:44.215194 systemd[1]: Starting ldconfig.service... Feb 9 09:45:44.217814 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 09:45:44.217918 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 09:45:44.220754 systemd[1]: Starting systemd-boot-update.service... Feb 9 09:45:44.224832 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 09:45:44.229773 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 09:45:44.232168 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 09:45:44.232293 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 09:45:44.235109 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 09:45:44.260272 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1714 (bootctl) Feb 9 09:45:44.262682 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 09:45:44.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.279558 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 09:45:44.286099 systemd-tmpfiles[1717]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 09:45:44.298155 systemd-tmpfiles[1717]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 09:45:44.307470 systemd-tmpfiles[1717]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 09:45:44.404565 systemd-fsck[1723]: fsck.fat 4.2 (2021-01-31) Feb 9 09:45:44.404565 systemd-fsck[1723]: /dev/nvme0n1p1: 236 files, 113719/258078 clusters Feb 9 09:45:44.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.412492 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 09:45:44.417485 systemd[1]: Mounting boot.mount... Feb 9 09:45:44.442094 systemd[1]: Mounted boot.mount. 
Feb 9 09:45:44.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.483768 systemd[1]: Finished systemd-boot-update.service. Feb 9 09:45:44.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.693582 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 09:45:44.698489 systemd[1]: Starting audit-rules.service... Feb 9 09:45:44.702877 systemd[1]: Starting clean-ca-certificates.service... Feb 9 09:45:44.713021 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 09:45:44.718380 systemd[1]: Starting systemd-resolved.service... Feb 9 09:45:44.733763 systemd[1]: Starting systemd-timesyncd.service... Feb 9 09:45:44.738454 systemd[1]: Starting systemd-update-utmp.service... Feb 9 09:45:44.743769 systemd[1]: Finished clean-ca-certificates.service. Feb 9 09:45:44.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.752745 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 09:45:44.784000 audit[1748]: SYSTEM_BOOT pid=1748 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.793485 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 09:45:44.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.796265 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 09:45:44.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.801494 systemd[1]: Finished systemd-update-utmp.service. Feb 9 09:45:44.823319 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 09:45:44.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.844582 ldconfig[1713]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 09:45:44.852727 systemd[1]: Finished ldconfig.service. Feb 9 09:45:44.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.857307 systemd[1]: Starting systemd-update-done.service... 
Feb 9 09:45:44.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:44.880845 systemd[1]: Finished systemd-update-done.service. Feb 9 09:45:44.892791 augenrules[1768]: No rules Feb 9 09:45:44.890000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 09:45:44.890000 audit[1768]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffed675d20 a2=420 a3=0 items=0 ppid=1741 pid=1768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:45:44.890000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 09:45:44.894107 systemd[1]: Finished audit-rules.service. Feb 9 09:45:44.961727 systemd-resolved[1745]: Positive Trust Anchors: Feb 9 09:45:44.962343 systemd-resolved[1745]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 09:45:44.962514 systemd-resolved[1745]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 09:45:44.974569 systemd[1]: Started systemd-timesyncd.service. Feb 9 09:45:44.976670 systemd[1]: Reached target time-set.target. Feb 9 09:45:44.993247 systemd-resolved[1745]: Defaulting to hostname 'linux'. Feb 9 09:45:44.996340 systemd[1]: Started systemd-resolved.service. Feb 9 09:45:44.998304 systemd[1]: Reached target network.target. Feb 9 09:45:45.000099 systemd[1]: Reached target nss-lookup.target. Feb 9 09:45:45.001687 systemd[1]: Reached target sysinit.target. Feb 9 09:45:45.003660 systemd[1]: Started motdgen.path. Feb 9 09:45:45.005453 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 09:45:45.007938 systemd[1]: Started logrotate.timer. Feb 9 09:45:45.009566 systemd[1]: Started mdadm.timer. Feb 9 09:45:45.010984 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 09:45:45.012739 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 09:45:45.012787 systemd[1]: Reached target paths.target. Feb 9 09:45:45.014275 systemd[1]: Reached target timers.target. Feb 9 09:45:45.016276 systemd[1]: Listening on dbus.socket. Feb 9 09:45:45.019972 systemd[1]: Starting docker.socket... Feb 9 09:45:45.024245 systemd[1]: Listening on sshd.socket. Feb 9 09:45:45.026199 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 09:45:45.026842 systemd[1]: Listening on docker.socket. Feb 9 09:45:45.028801 systemd[1]: Reached target sockets.target. Feb 9 09:45:45.030515 systemd[1]: Reached target basic.target. 
Feb 9 09:45:45.032396 systemd[1]: System is tainted: cgroupsv1 Feb 9 09:45:45.032476 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 09:45:45.032527 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 09:45:45.034882 systemd[1]: Starting containerd.service... Feb 9 09:45:45.038546 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Feb 9 09:45:45.043023 systemd[1]: Starting dbus.service... Feb 9 09:45:45.046673 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 09:45:45.051002 systemd[1]: Starting extend-filesystems.service... Feb 9 09:45:45.052751 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 09:45:45.056086 systemd[1]: Starting motdgen.service... Feb 9 09:45:45.066361 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 09:45:45.071344 systemd[1]: Starting prepare-critools.service... Feb 9 09:45:45.078781 systemd[1]: Starting prepare-helm.service... Feb 9 09:45:45.082994 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 09:45:45.089298 systemd[1]: Starting sshd-keygen.service... Feb 9 09:45:45.100480 systemd[1]: Starting systemd-logind.service... Feb 9 09:45:45.105777 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 09:45:45.105937 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 9 09:45:45.109309 systemd[1]: Starting update-engine.service... Feb 9 09:45:45.128472 jq[1781]: false Feb 9 09:45:45.123070 systemd-networkd[1594]: eth0: Gained IPv6LL Feb 9 09:45:45.126242 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 09:45:45.134679 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 09:45:45.251702 jq[1798]: true Feb 9 09:45:45.137625 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 09:45:45.138559 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 09:45:45.159939 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 09:45:45.184198 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 9 09:45:45.259859 tar[1803]: ./ Feb 9 09:45:45.259859 tar[1803]: ./macvlan Feb 9 09:45:45.190977 systemd[1]: Reached target network-online.target. Feb 9 09:45:45.198211 systemd[1]: Started amazon-ssm-agent.service. Feb 9 09:45:45.205852 systemd[1]: Started nvidia.service. Feb 9 09:45:45.244202 systemd-timesyncd[1747]: Contacted time server 207.246.65.226:123 (0.flatcar.pool.ntp.org). Feb 9 09:45:45.244315 systemd-timesyncd[1747]: Initial clock synchronization to Fri 2024-02-09 09:45:45.057726 UTC. Feb 9 09:45:45.278844 tar[1811]: linux-arm64/helm Feb 9 09:45:45.283001 jq[1816]: true Feb 9 09:45:45.315836 tar[1804]: crictl Feb 9 09:45:45.367882 dbus-daemon[1780]: [system] SELinux support is enabled Feb 9 09:45:45.368772 systemd[1]: Started dbus.service. Feb 9 09:45:45.373540 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 09:45:45.373625 systemd[1]: Reached target system-config.target. 
Feb 9 09:45:45.375459 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 09:45:45.375515 systemd[1]: Reached target user-config.target. Feb 9 09:45:45.397032 extend-filesystems[1782]: Found nvme0n1 Feb 9 09:45:45.404486 extend-filesystems[1782]: Found nvme0n1p1 Feb 9 09:45:45.406283 extend-filesystems[1782]: Found nvme0n1p2 Feb 9 09:45:45.409560 extend-filesystems[1782]: Found nvme0n1p3 Feb 9 09:45:45.413253 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 09:45:45.413769 systemd[1]: Finished motdgen.service. Feb 9 09:45:45.424460 extend-filesystems[1782]: Found usr Feb 9 09:45:45.432827 extend-filesystems[1782]: Found nvme0n1p4 Feb 9 09:45:45.435428 extend-filesystems[1782]: Found nvme0n1p6 Feb 9 09:45:45.437538 extend-filesystems[1782]: Found nvme0n1p7 Feb 9 09:45:45.442309 extend-filesystems[1782]: Found nvme0n1p9 Feb 9 09:45:45.445176 extend-filesystems[1782]: Checking size of /dev/nvme0n1p9 Feb 9 09:45:45.447870 dbus-daemon[1780]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1594 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 9 09:45:45.461868 dbus-daemon[1780]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 9 09:45:45.468107 systemd[1]: Starting systemd-hostnamed.service... Feb 9 09:45:45.513930 update_engine[1796]: I0209 09:45:45.505793 1796 main.cc:92] Flatcar Update Engine starting Feb 9 09:45:45.512255 systemd[1]: Started update-engine.service. Feb 9 09:45:45.517076 systemd[1]: Started locksmithd.service. Feb 9 09:45:45.522838 update_engine[1796]: I0209 09:45:45.521761 1796 update_check_scheduler.cc:74] Next update check in 5m39s Feb 9 09:45:45.558983 extend-filesystems[1782]: Resized partition /dev/nvme0n1p9 Feb 9 09:45:45.576734 extend-filesystems[1858]: resize2fs 1.46.5 (30-Dec-2021) Feb 9 09:45:45.610658 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 9 09:45:45.720752 env[1817]: time="2024-02-09T09:45:45.720655792Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 09:45:45.722299 amazon-ssm-agent[1818]: 2024/02/09 09:45:45 Failed to load instance info from vault. RegistrationKey does not exist. Feb 9 09:45:45.735898 amazon-ssm-agent[1818]: Initializing new seelog logger Feb 9 09:45:45.739423 amazon-ssm-agent[1818]: New Seelog Logger Creation Complete Feb 9 09:45:45.739423 amazon-ssm-agent[1818]: 2024/02/09 09:45:45 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 9 09:45:45.739423 amazon-ssm-agent[1818]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 9 09:45:45.739423 amazon-ssm-agent[1818]: 2024/02/09 09:45:45 processing appconfig overrides Feb 9 09:45:45.739709 bash[1859]: Updated "/home/core/.ssh/authorized_keys" Feb 9 09:45:45.740669 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 9 09:45:45.741009 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 09:45:45.772102 systemd-logind[1794]: Watching system buttons on /dev/input/event0 (Power Button) Feb 9 09:45:45.772928 systemd-logind[1794]: New seat seat0. 
Feb 9 09:45:45.777913 extend-filesystems[1858]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 9 09:45:45.777913 extend-filesystems[1858]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 9 09:45:45.777913 extend-filesystems[1858]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Feb 9 09:45:45.800568 extend-filesystems[1782]: Resized filesystem in /dev/nvme0n1p9 Feb 9 09:45:45.787911 systemd[1]: Started systemd-logind.service. Feb 9 09:45:45.804537 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 09:45:45.805092 systemd[1]: Finished extend-filesystems.service. Feb 9 09:45:45.819398 systemd[1]: nvidia.service: Deactivated successfully. Feb 9 09:45:45.888026 tar[1803]: ./static Feb 9 09:45:45.996463 env[1817]: time="2024-02-09T09:45:45.996406277Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 09:45:46.001903 env[1817]: time="2024-02-09T09:45:46.001854225Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:45:46.006200 env[1817]: time="2024-02-09T09:45:46.006137930Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 09:45:46.012429 env[1817]: time="2024-02-09T09:45:46.012375157Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:45:46.013434 env[1817]: time="2024-02-09T09:45:46.013383182Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 09:45:46.014576 env[1817]: time="2024-02-09T09:45:46.014533964Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 9 09:45:46.014786 env[1817]: time="2024-02-09T09:45:46.014753150Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 09:45:46.015505 env[1817]: time="2024-02-09T09:45:46.015469380Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 09:45:46.019099 env[1817]: time="2024-02-09T09:45:46.019006159Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:45:46.022581 env[1817]: time="2024-02-09T09:45:46.022443642Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:45:46.031465 env[1817]: time="2024-02-09T09:45:46.031404195Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 09:45:46.037380 env[1817]: time="2024-02-09T09:45:46.037277288Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Feb 9 09:45:46.038109 env[1817]: time="2024-02-09T09:45:46.038055110Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 09:45:46.038918 env[1817]: time="2024-02-09T09:45:46.038881327Z" level=info msg="metadata content store policy set" policy=shared Feb 9 09:45:46.052774 env[1817]: time="2024-02-09T09:45:46.052688734Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 09:45:46.052994 env[1817]: time="2024-02-09T09:45:46.052964026Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 09:45:46.053146 env[1817]: time="2024-02-09T09:45:46.053115292Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 09:45:46.053461 env[1817]: time="2024-02-09T09:45:46.053422428Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 09:45:46.053682 env[1817]: time="2024-02-09T09:45:46.053595048Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 09:45:46.053828 env[1817]: time="2024-02-09T09:45:46.053799607Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 09:45:46.054047 env[1817]: time="2024-02-09T09:45:46.054011597Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 09:45:46.054867 env[1817]: time="2024-02-09T09:45:46.054821134Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 09:45:46.055620 env[1817]: time="2024-02-09T09:45:46.055555977Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 09:45:46.056708 env[1817]: time="2024-02-09T09:45:46.056674761Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 09:45:46.056837 env[1817]: time="2024-02-09T09:45:46.056807579Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 09:45:46.056977 env[1817]: time="2024-02-09T09:45:46.056948190Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 09:45:46.060243 env[1817]: time="2024-02-09T09:45:46.060194768Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 09:45:46.063984 env[1817]: time="2024-02-09T09:45:46.063932390Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 09:45:46.065371 env[1817]: time="2024-02-09T09:45:46.065327979Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 09:45:46.071085 env[1817]: time="2024-02-09T09:45:46.071020400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 09:45:46.071461 tar[1803]: ./vlan Feb 9 09:45:46.071656 env[1817]: time="2024-02-09T09:45:46.071578602Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 09:45:46.072251 env[1817]: time="2024-02-09T09:45:46.072187004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Feb 9 09:45:46.075777 env[1817]: time="2024-02-09T09:45:46.075705698Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 09:45:46.076655 env[1817]: time="2024-02-09T09:45:46.076544022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 09:45:46.076834 env[1817]: time="2024-02-09T09:45:46.076801264Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 09:45:46.077026 env[1817]: time="2024-02-09T09:45:46.076980483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 09:45:46.077185 env[1817]: time="2024-02-09T09:45:46.077152786Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 09:45:46.080452 env[1817]: time="2024-02-09T09:45:46.077354989Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 09:45:46.080805 env[1817]: time="2024-02-09T09:45:46.080720695Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 09:45:46.081280 env[1817]: time="2024-02-09T09:45:46.081220178Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 09:45:46.082226 env[1817]: time="2024-02-09T09:45:46.082179867Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 09:45:46.082494 env[1817]: time="2024-02-09T09:45:46.082440064Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 09:45:46.082737 env[1817]: time="2024-02-09T09:45:46.082686464Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 09:45:46.082891 env[1817]: time="2024-02-09T09:45:46.082862952Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 09:45:46.083416 env[1817]: time="2024-02-09T09:45:46.083351535Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 09:45:46.084172 env[1817]: time="2024-02-09T09:45:46.084119957Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 09:45:46.084351 env[1817]: time="2024-02-09T09:45:46.084320331Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 09:45:46.084546 env[1817]: time="2024-02-09T09:45:46.084496890Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 9 09:45:46.088358 env[1817]: time="2024-02-09T09:45:46.088207238Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 09:45:46.089811 env[1817]: time="2024-02-09T09:45:46.089442923Z" level=info msg="Connect containerd service" Feb 9 09:45:46.089954 env[1817]: time="2024-02-09T09:45:46.089589981Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 09:45:46.091801 env[1817]: time="2024-02-09T09:45:46.091736317Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 09:45:46.099479 env[1817]: time="2024-02-09T09:45:46.099410576Z" level=info msg="Start subscribing containerd event" Feb 9 09:45:46.099781 env[1817]: time="2024-02-09T09:45:46.099751198Z" level=info msg="Start recovering state" Feb 9 09:45:46.100849 env[1817]: time="2024-02-09T09:45:46.100796600Z" level=info msg="Start event monitor" Feb 9 09:45:46.101300 env[1817]: time="2024-02-09T09:45:46.101255260Z" level=info msg="Start snapshots syncer" Feb 9 09:45:46.101429 env[1817]: time="2024-02-09T09:45:46.101402283Z" level=info msg="Start cni network conf syncer for default" Feb 9 09:45:46.101556 env[1817]: time="2024-02-09T09:45:46.101530013Z" level=info msg="Start streaming server" Feb 9 09:45:46.102469 env[1817]: time="2024-02-09T09:45:46.102417235Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Feb 9 09:45:46.107178 env[1817]: time="2024-02-09T09:45:46.107112296Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 9 09:45:46.110308 env[1817]: time="2024-02-09T09:45:46.110246756Z" level=info msg="containerd successfully booted in 0.472111s" Feb 9 09:45:46.110413 systemd[1]: Started containerd.service. Feb 9 09:45:46.181834 dbus-daemon[1780]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 9 09:45:46.182066 systemd[1]: Started systemd-hostnamed.service. Feb 9 09:45:46.185236 dbus-daemon[1780]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1846 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 9 09:45:46.190339 systemd[1]: Starting polkit.service... Feb 9 09:45:46.233395 polkitd[1927]: Started polkitd version 121 Feb 9 09:45:46.265998 polkitd[1927]: Loading rules from directory /etc/polkit-1/rules.d Feb 9 09:45:46.266675 polkitd[1927]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 9 09:45:46.272897 polkitd[1927]: Finished loading, compiling and executing 2 rules Feb 9 09:45:46.273926 dbus-daemon[1780]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 9 09:45:46.274182 systemd[1]: Started polkit.service. Feb 9 09:45:46.278252 polkitd[1927]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 9 09:45:46.304637 systemd[1]: Created slice system-sshd.slice. Feb 9 09:45:46.329344 systemd-resolved[1745]: System hostname changed to 'ip-172-31-16-76'. Feb 9 09:45:46.329353 systemd-hostnamed[1846]: Hostname set to (transient) Feb 9 09:45:46.340396 tar[1803]: ./portmap Feb 9 09:45:46.373458 coreos-metadata[1778]: Feb 09 09:45:46.373 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 9 09:45:46.375291 coreos-metadata[1778]: Feb 09 09:45:46.374 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Feb 9 09:45:46.376109 coreos-metadata[1778]: Feb 09 09:45:46.375 INFO Fetch successful Feb 9 09:45:46.376109 coreos-metadata[1778]: Feb 09 09:45:46.375 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 9 09:45:46.377013 coreos-metadata[1778]: Feb 09 09:45:46.376 INFO Fetch successful Feb 9 09:45:46.381753 unknown[1778]: wrote ssh authorized keys file for user: core Feb 9 09:45:46.419360 update-ssh-keys[1967]: Updated "/home/core/.ssh/authorized_keys" Feb 9 09:45:46.420108 systemd[1]: Finished coreos-metadata-sshkeys@core.service. 
Feb 9 09:45:46.555556 tar[1803]: ./host-local Feb 9 09:45:46.700392 tar[1803]: ./vrf Feb 9 09:45:46.760511 amazon-ssm-agent[1818]: 2024-02-09 09:45:46 INFO Create new startup processor Feb 9 09:45:46.783520 amazon-ssm-agent[1818]: 2024-02-09 09:45:46 INFO [LongRunningPluginsManager] registered plugins: {} Feb 9 09:45:46.783678 amazon-ssm-agent[1818]: 2024-02-09 09:45:46 INFO Initializing bookkeeping folders Feb 9 09:45:46.783678 amazon-ssm-agent[1818]: 2024-02-09 09:45:46 INFO removing the completed state files Feb 9 09:45:46.783678 amazon-ssm-agent[1818]: 2024-02-09 09:45:46 INFO Initializing bookkeeping folders for long running plugins Feb 9 09:45:46.783678 amazon-ssm-agent[1818]: 2024-02-09 09:45:46 INFO Initializing replies folder for MDS reply requests that couldn't reach the service Feb 9 09:45:46.783891 amazon-ssm-agent[1818]: 2024-02-09 09:45:46 INFO Initializing healthcheck folders for long running plugins Feb 9 09:45:46.783891 amazon-ssm-agent[1818]: 2024-02-09 09:45:46 INFO Initializing locations for inventory plugin Feb 9 09:45:46.783891 amazon-ssm-agent[1818]: 2024-02-09 09:45:46 INFO Initializing default location for custom inventory Feb 9 09:45:46.783891 amazon-ssm-agent[1818]: 2024-02-09 09:45:46 INFO Initializing default location for file inventory Feb 9 09:45:46.783891 amazon-ssm-agent[1818]: 2024-02-09 09:45:46 INFO Initializing default location for role inventory Feb 9 09:45:46.783891 amazon-ssm-agent[1818]: 2024-02-09 09:45:46 INFO Init the cloudwatchlogs publisher Feb 9 09:45:46.784162 amazon-ssm-agent[1818]: 2024-02-09 09:45:46 INFO [instanceID=i-0ddc6701b1f0fe26c] Successfully loaded platform independent plugin aws:softwareInventory Feb 9 09:45:46.784162 amazon-ssm-agent[1818]: 2024-02-09 09:45:46 INFO [instanceID=i-0ddc6701b1f0fe26c] Successfully loaded platform independent plugin aws:runPowerShellScript Feb 9 09:45:46.784162 amazon-ssm-agent[1818]: 2024-02-09 09:45:46 INFO [instanceID=i-0ddc6701b1f0fe26c] Successfully loaded platform independent plugin aws:downloadContent Feb 9 09:45:46.784162 amazon-ssm-agent[1818]: 2024-02-09 09:45:46 INFO [instanceID=i-0ddc6701b1f0fe26c] Successfully loaded platform independent plugin aws:runDocument Feb 9 09:45:46.784162 amazon-ssm-agent[1818]: 2024-02-09 09:45:46 INFO [instanceID=i-0ddc6701b1f0fe26c] Successfully loaded platform independent plugin aws:updateSsmAgent Feb 9 09:45:46.784162 amazon-ssm-agent[1818]: 2024-02-09 09:45:46 INFO [instanceID=i-0ddc6701b1f0fe26c] Successfully loaded platform independent plugin aws:configureDocker Feb 9 09:45:46.784162 amazon-ssm-agent[1818]: 2024-02-09 09:45:46 INFO [instanceID=i-0ddc6701b1f0fe26c] Successfully loaded platform independent plugin aws:runDockerAction Feb 9 09:45:46.784162 amazon-ssm-agent[1818]: 2024-02-09 09:45:46 INFO [instanceID=i-0ddc6701b1f0fe26c] Successfully loaded platform independent plugin aws:refreshAssociation Feb 9 09:45:46.784162 amazon-ssm-agent[1818]: 2024-02-09 09:45:46 INFO [instanceID=i-0ddc6701b1f0fe26c] Successfully loaded platform independent plugin aws:configurePackage Feb 9 09:45:46.784678 amazon-ssm-agent[1818]: 2024-02-09 09:45:46 INFO [instanceID=i-0ddc6701b1f0fe26c] Successfully loaded platform dependent plugin aws:runShellScript Feb 9 09:45:46.784678 amazon-ssm-agent[1818]: 2024-02-09 09:45:46 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 Feb 9 09:45:46.784678 amazon-ssm-agent[1818]: 2024-02-09 09:45:46 INFO OS: linux, Arch: arm64 Feb 9 09:45:46.787018 amazon-ssm-agent[1818]: 2024-02-09 09:45:46 INFO [MessagingDeliveryService] Starting 
document processing engine... Feb 9 09:45:46.796755 amazon-ssm-agent[1818]: datastore file /var/lib/amazon/ssm/i-0ddc6701b1f0fe26c/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute Feb 9 09:45:46.883671 tar[1803]: ./bridge Feb 9 09:45:46.885786 amazon-ssm-agent[1818]: 2024-02-09 09:45:46 INFO [MessagingDeliveryService] [EngineProcessor] Starting Feb 9 09:45:46.980708 amazon-ssm-agent[1818]: 2024-02-09 09:45:46 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing Feb 9 09:45:47.043008 tar[1803]: ./tuning Feb 9 09:45:47.075259 amazon-ssm-agent[1818]: 2024-02-09 09:45:46 INFO [MessagingDeliveryService] Starting message polling Feb 9 09:45:47.168555 tar[1803]: ./firewall Feb 9 09:45:47.169925 amazon-ssm-agent[1818]: 2024-02-09 09:45:46 INFO [MessagingDeliveryService] Starting send replies to MDS Feb 9 09:45:47.264911 amazon-ssm-agent[1818]: 2024-02-09 09:45:46 INFO [instanceID=i-0ddc6701b1f0fe26c] Starting association polling Feb 9 09:45:47.325381 tar[1803]: ./host-device Feb 9 09:45:47.360045 amazon-ssm-agent[1818]: 2024-02-09 09:45:46 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting Feb 9 09:45:47.455329 amazon-ssm-agent[1818]: 2024-02-09 09:45:46 INFO [MessagingDeliveryService] [Association] Launching response handler Feb 9 09:45:47.459058 tar[1803]: ./sbr Feb 9 09:45:47.494481 tar[1811]: linux-arm64/LICENSE Feb 9 09:45:47.495092 tar[1811]: linux-arm64/README.md Feb 9 09:45:47.521408 systemd[1]: Finished prepare-helm.service. Feb 9 09:45:47.552515 amazon-ssm-agent[1818]: 2024-02-09 09:45:46 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing Feb 9 09:45:47.583190 tar[1803]: ./loopback Feb 9 09:45:47.600561 systemd[1]: Finished prepare-critools.service. Feb 9 09:45:47.644636 tar[1803]: ./dhcp Feb 9 09:45:47.648187 amazon-ssm-agent[1818]: 2024-02-09 09:45:46 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service Feb 9 09:45:47.744123 amazon-ssm-agent[1818]: 2024-02-09 09:45:46 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized Feb 9 09:45:47.761007 tar[1803]: ./ptp Feb 9 09:45:47.810437 tar[1803]: ./ipvlan Feb 9 09:45:47.840197 amazon-ssm-agent[1818]: 2024-02-09 09:45:46 INFO [HealthCheck] HealthCheck reporting agent health. Feb 9 09:45:47.859130 tar[1803]: ./bandwidth Feb 9 09:45:47.937653 amazon-ssm-agent[1818]: 2024-02-09 09:45:46 INFO [MessageGatewayService] Starting session document processing engine... Feb 9 09:45:47.934407 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 09:45:47.953564 locksmithd[1850]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 09:45:48.033720 amazon-ssm-agent[1818]: 2024-02-09 09:45:46 INFO [MessageGatewayService] [EngineProcessor] Starting Feb 9 09:45:48.130367 amazon-ssm-agent[1818]: 2024-02-09 09:45:46 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module. Feb 9 09:45:48.227226 amazon-ssm-agent[1818]: 2024-02-09 09:45:46 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-0ddc6701b1f0fe26c, requestId: 3cf785c5-6e07-4386-a2b8-b8e92e74b264 Feb 9 09:45:48.324294 amazon-ssm-agent[1818]: 2024-02-09 09:45:46 INFO [OfflineService] Starting document processing engine... 
Feb 9 09:45:48.421563 amazon-ssm-agent[1818]: 2024-02-09 09:45:46 INFO [OfflineService] [EngineProcessor] Starting Feb 9 09:45:48.519015 amazon-ssm-agent[1818]: 2024-02-09 09:45:46 INFO [OfflineService] [EngineProcessor] Initial processing Feb 9 09:45:48.616722 amazon-ssm-agent[1818]: 2024-02-09 09:45:46 INFO [OfflineService] Starting message polling Feb 9 09:45:48.714501 amazon-ssm-agent[1818]: 2024-02-09 09:45:46 INFO [OfflineService] Starting send replies to MDS Feb 9 09:45:48.812551 amazon-ssm-agent[1818]: 2024-02-09 09:45:46 INFO [LongRunningPluginsManager] starting long running plugin manager Feb 9 09:45:48.910838 amazon-ssm-agent[1818]: 2024-02-09 09:45:46 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute Feb 9 09:45:49.009229 amazon-ssm-agent[1818]: 2024-02-09 09:45:46 INFO [MessageGatewayService] listening reply. Feb 9 09:45:49.107839 amazon-ssm-agent[1818]: 2024-02-09 09:45:46 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck Feb 9 09:45:49.206670 amazon-ssm-agent[1818]: 2024-02-09 09:45:46 INFO [StartupProcessor] Executing startup processor tasks Feb 9 09:45:49.305706 amazon-ssm-agent[1818]: 2024-02-09 09:45:46 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running Feb 9 09:45:49.404940 amazon-ssm-agent[1818]: 2024-02-09 09:45:46 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk Feb 9 09:45:49.504438 amazon-ssm-agent[1818]: 2024-02-09 09:45:46 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.2 Feb 9 09:45:49.603963 amazon-ssm-agent[1818]: 2024-02-09 09:45:46 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0ddc6701b1f0fe26c?role=subscribe&stream=input Feb 9 09:45:49.703793 amazon-ssm-agent[1818]: 2024-02-09 09:45:46 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0ddc6701b1f0fe26c?role=subscribe&stream=input Feb 9 09:45:49.803839 amazon-ssm-agent[1818]: 2024-02-09 09:45:46 INFO [MessageGatewayService] Starting receiving message from control channel Feb 9 09:45:49.903990 amazon-ssm-agent[1818]: 2024-02-09 09:45:46 INFO [MessageGatewayService] [EngineProcessor] Initial processing Feb 9 09:45:51.336933 sshd_keygen[1836]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 09:45:51.373200 systemd[1]: Finished sshd-keygen.service. Feb 9 09:45:51.378814 systemd[1]: Starting issuegen.service... Feb 9 09:45:51.383530 systemd[1]: Started sshd@0-172.31.16.76:22-139.178.89.65:57780.service. Feb 9 09:45:51.397135 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 09:45:51.397668 systemd[1]: Finished issuegen.service. Feb 9 09:45:51.403324 systemd[1]: Starting systemd-user-sessions.service... Feb 9 09:45:51.419757 systemd[1]: Finished systemd-user-sessions.service. Feb 9 09:45:51.424414 systemd[1]: Started getty@tty1.service. Feb 9 09:45:51.430084 systemd[1]: Started serial-getty@ttyS0.service. Feb 9 09:45:51.432765 systemd[1]: Reached target getty.target. Feb 9 09:45:51.435519 systemd[1]: Reached target multi-user.target. Feb 9 09:45:51.441651 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 09:45:51.459076 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 09:45:51.459584 systemd[1]: Finished systemd-update-utmp-runlevel.service. 
Feb 9 09:45:51.465405 systemd[1]: Startup finished in 25.871s (kernel) + 13.142s (userspace) = 39.013s. Feb 9 09:45:51.617535 sshd[2014]: Accepted publickey for core from 139.178.89.65 port 57780 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:45:51.622627 sshd[2014]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:45:51.643133 systemd[1]: Created slice user-500.slice. Feb 9 09:45:51.645520 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 09:45:51.651075 systemd-logind[1794]: New session 1 of user core. Feb 9 09:45:51.665798 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 09:45:51.670360 systemd[1]: Starting user@500.service... Feb 9 09:45:51.681291 (systemd)[2028]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:45:51.862423 systemd[2028]: Queued start job for default target default.target. Feb 9 09:45:51.863566 systemd[2028]: Reached target paths.target. Feb 9 09:45:51.863660 systemd[2028]: Reached target sockets.target. Feb 9 09:45:51.863695 systemd[2028]: Reached target timers.target. Feb 9 09:45:51.863725 systemd[2028]: Reached target basic.target. Feb 9 09:45:51.863821 systemd[2028]: Reached target default.target. Feb 9 09:45:51.863885 systemd[2028]: Startup finished in 171ms. Feb 9 09:45:51.864060 systemd[1]: Started user@500.service. Feb 9 09:45:51.866126 systemd[1]: Started session-1.scope. Feb 9 09:45:52.012177 systemd[1]: Started sshd@1-172.31.16.76:22-139.178.89.65:34462.service. Feb 9 09:45:52.189956 sshd[2037]: Accepted publickey for core from 139.178.89.65 port 34462 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:45:52.192927 sshd[2037]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:45:52.201821 systemd[1]: Started session-2.scope. Feb 9 09:45:52.203713 systemd-logind[1794]: New session 2 of user core. Feb 9 09:45:52.339118 sshd[2037]: pam_unix(sshd:session): session closed for user core Feb 9 09:45:52.344589 systemd-logind[1794]: Session 2 logged out. Waiting for processes to exit. Feb 9 09:45:52.346152 systemd[1]: sshd@1-172.31.16.76:22-139.178.89.65:34462.service: Deactivated successfully. Feb 9 09:45:52.347643 systemd[1]: session-2.scope: Deactivated successfully. Feb 9 09:45:52.348977 systemd-logind[1794]: Removed session 2. Feb 9 09:45:52.365231 systemd[1]: Started sshd@2-172.31.16.76:22-139.178.89.65:34476.service. Feb 9 09:45:52.540108 sshd[2044]: Accepted publickey for core from 139.178.89.65 port 34476 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:45:52.543019 sshd[2044]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:45:52.550491 systemd-logind[1794]: New session 3 of user core. Feb 9 09:45:52.551406 systemd[1]: Started session-3.scope. Feb 9 09:45:52.673094 sshd[2044]: pam_unix(sshd:session): session closed for user core Feb 9 09:45:52.678309 systemd[1]: sshd@2-172.31.16.76:22-139.178.89.65:34476.service: Deactivated successfully. Feb 9 09:45:52.679706 systemd[1]: session-3.scope: Deactivated successfully. Feb 9 09:45:52.682050 systemd-logind[1794]: Session 3 logged out. Waiting for processes to exit. Feb 9 09:45:52.684371 systemd-logind[1794]: Removed session 3. Feb 9 09:45:52.698254 systemd[1]: Started sshd@3-172.31.16.76:22-139.178.89.65:34490.service. 
Feb 9 09:45:52.867747 sshd[2051]: Accepted publickey for core from 139.178.89.65 port 34490 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:45:52.870185 sshd[2051]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:45:52.878082 systemd-logind[1794]: New session 4 of user core. Feb 9 09:45:52.879115 systemd[1]: Started session-4.scope. Feb 9 09:45:53.010673 sshd[2051]: pam_unix(sshd:session): session closed for user core Feb 9 09:45:53.015314 systemd[1]: sshd@3-172.31.16.76:22-139.178.89.65:34490.service: Deactivated successfully. Feb 9 09:45:53.017261 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 09:45:53.018854 systemd-logind[1794]: Session 4 logged out. Waiting for processes to exit. Feb 9 09:45:53.021053 systemd-logind[1794]: Removed session 4. Feb 9 09:45:53.035533 systemd[1]: Started sshd@4-172.31.16.76:22-139.178.89.65:34494.service. Feb 9 09:45:53.208863 sshd[2058]: Accepted publickey for core from 139.178.89.65 port 34494 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:45:53.211848 sshd[2058]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:45:53.219782 systemd-logind[1794]: New session 5 of user core. Feb 9 09:45:53.220051 systemd[1]: Started session-5.scope. Feb 9 09:45:53.342719 sudo[2062]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 09:45:53.343858 sudo[2062]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 09:45:54.021698 systemd[1]: Starting docker.service... Feb 9 09:45:54.098281 env[2077]: time="2024-02-09T09:45:54.098187987Z" level=info msg="Starting up" Feb 9 09:45:54.101249 env[2077]: time="2024-02-09T09:45:54.101203574Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 09:45:54.101526 env[2077]: time="2024-02-09T09:45:54.101496435Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 09:45:54.101687 env[2077]: time="2024-02-09T09:45:54.101655020Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 09:45:54.101793 env[2077]: time="2024-02-09T09:45:54.101767037Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 09:45:54.104687 env[2077]: time="2024-02-09T09:45:54.104639823Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 09:45:54.104687 env[2077]: time="2024-02-09T09:45:54.104679380Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 09:45:54.104894 env[2077]: time="2024-02-09T09:45:54.104713913Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 09:45:54.104894 env[2077]: time="2024-02-09T09:45:54.104740590Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 09:45:54.638403 env[2077]: time="2024-02-09T09:45:54.638333217Z" level=warning msg="Your kernel does not support cgroup blkio weight" Feb 9 09:45:54.638403 env[2077]: time="2024-02-09T09:45:54.638379309Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Feb 9 09:45:54.638747 env[2077]: time="2024-02-09T09:45:54.638722180Z" level=info msg="Loading containers: start." 
Feb 9 09:45:54.808783 kernel: Initializing XFRM netlink socket Feb 9 09:45:54.853008 env[2077]: time="2024-02-09T09:45:54.852964482Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 9 09:45:54.856417 (udev-worker)[2088]: Network interface NamePolicy= disabled on kernel command line. Feb 9 09:45:54.950324 systemd-networkd[1594]: docker0: Link UP Feb 9 09:45:54.967687 env[2077]: time="2024-02-09T09:45:54.967616767Z" level=info msg="Loading containers: done." Feb 9 09:45:54.996002 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck548378065-merged.mount: Deactivated successfully. Feb 9 09:45:55.006366 env[2077]: time="2024-02-09T09:45:55.006292096Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 9 09:45:55.007038 env[2077]: time="2024-02-09T09:45:55.007007209Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 9 09:45:55.007392 env[2077]: time="2024-02-09T09:45:55.007367060Z" level=info msg="Daemon has completed initialization" Feb 9 09:45:55.032303 systemd[1]: Started docker.service. Feb 9 09:45:55.048205 env[2077]: time="2024-02-09T09:45:55.048094040Z" level=info msg="API listen on /run/docker.sock" Feb 9 09:45:55.080435 systemd[1]: Reloading. Feb 9 09:45:55.215979 /usr/lib/systemd/system-generators/torcx-generator[2214]: time="2024-02-09T09:45:55Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:45:55.216775 /usr/lib/systemd/system-generators/torcx-generator[2214]: time="2024-02-09T09:45:55Z" level=info msg="torcx already run" Feb 9 09:45:55.391388 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:45:55.391428 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:45:55.430323 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:45:55.618863 systemd[1]: Started kubelet.service. Feb 9 09:45:55.770978 kubelet[2276]: E0209 09:45:55.770886 2276 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 09:45:55.776375 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 09:45:55.776856 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 09:45:56.227272 env[1817]: time="2024-02-09T09:45:56.227218317Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 9 09:45:56.802725 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3153736091.mount: Deactivated successfully. Feb 9 09:45:58.224494 amazon-ssm-agent[1818]: 2024-02-09 09:45:58 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds. 
Feb 9 09:45:59.117683 env[1817]: time="2024-02-09T09:45:59.117592900Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:45:59.123180 env[1817]: time="2024-02-09T09:45:59.123127483Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d88fbf485621d26e515136c1848b666d7dfe0fa84ca7ebd826447b039d306d88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:45:59.128455 env[1817]: time="2024-02-09T09:45:59.128405411Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:45:59.133338 env[1817]: time="2024-02-09T09:45:59.133291045Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:45:59.135409 env[1817]: time="2024-02-09T09:45:59.135350362Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:d88fbf485621d26e515136c1848b666d7dfe0fa84ca7ebd826447b039d306d88\"" Feb 9 09:45:59.154869 env[1817]: time="2024-02-09T09:45:59.154781908Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 9 09:46:01.469405 env[1817]: time="2024-02-09T09:46:01.469335794Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:01.472310 env[1817]: time="2024-02-09T09:46:01.472229102Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:71d8e883014e0849ca9a3161bd1feac09ad210dea2f4140732e218f04a6826c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:01.475757 env[1817]: time="2024-02-09T09:46:01.475708346Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:01.479084 env[1817]: time="2024-02-09T09:46:01.479036603Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:01.480876 env[1817]: time="2024-02-09T09:46:01.480801795Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:71d8e883014e0849ca9a3161bd1feac09ad210dea2f4140732e218f04a6826c2\"" Feb 9 09:46:01.501228 env[1817]: time="2024-02-09T09:46:01.501173254Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 9 09:46:03.118179 env[1817]: time="2024-02-09T09:46:03.118098538Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:03.121502 env[1817]: time="2024-02-09T09:46:03.121440279Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a636f3d6300bad4775ea80ad544e38f486a039732c4871bddc1db3a5336c871a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:03.124863 env[1817]: 
time="2024-02-09T09:46:03.124801556Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:03.128188 env[1817]: time="2024-02-09T09:46:03.128126250Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:03.130080 env[1817]: time="2024-02-09T09:46:03.130028906Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:a636f3d6300bad4775ea80ad544e38f486a039732c4871bddc1db3a5336c871a\"" Feb 9 09:46:03.147653 env[1817]: time="2024-02-09T09:46:03.147582412Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 9 09:46:04.684765 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4196083514.mount: Deactivated successfully. Feb 9 09:46:05.409088 env[1817]: time="2024-02-09T09:46:05.409028953Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:05.418809 env[1817]: time="2024-02-09T09:46:05.418738966Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:05.436779 env[1817]: time="2024-02-09T09:46:05.436727598Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:05.444044 env[1817]: time="2024-02-09T09:46:05.443301487Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:05.444044 env[1817]: time="2024-02-09T09:46:05.443827161Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926\"" Feb 9 09:46:05.462862 env[1817]: time="2024-02-09T09:46:05.462803287Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 9 09:46:05.931452 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 9 09:46:05.931838 systemd[1]: Stopped kubelet.service. Feb 9 09:46:05.934885 systemd[1]: Started kubelet.service. Feb 9 09:46:06.041111 kubelet[2318]: E0209 09:46:06.041040 2318 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 09:46:06.048897 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 09:46:06.049295 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 09:46:06.810065 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3878793918.mount: Deactivated successfully. 
Feb 9 09:46:06.818737 env[1817]: time="2024-02-09T09:46:06.818680392Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:06.821826 env[1817]: time="2024-02-09T09:46:06.821777596Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:06.824547 env[1817]: time="2024-02-09T09:46:06.824502621Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:06.826761 env[1817]: time="2024-02-09T09:46:06.826698880Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:06.828252 env[1817]: time="2024-02-09T09:46:06.828186434Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 9 09:46:06.845962 env[1817]: time="2024-02-09T09:46:06.845914299Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\"" Feb 9 09:46:07.993379 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1873231868.mount: Deactivated successfully. Feb 9 09:46:10.850553 env[1817]: time="2024-02-09T09:46:10.850483810Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:10.853592 env[1817]: time="2024-02-09T09:46:10.853531004Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ef245802824036d4a23ba6f8b3f04c055416f9dc73a54d546b1f98ad16f6b8cb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:10.856918 env[1817]: time="2024-02-09T09:46:10.856860172Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:10.859942 env[1817]: time="2024-02-09T09:46:10.859889899Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:10.861241 env[1817]: time="2024-02-09T09:46:10.861198917Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:ef245802824036d4a23ba6f8b3f04c055416f9dc73a54d546b1f98ad16f6b8cb\"" Feb 9 09:46:10.878397 env[1817]: time="2024-02-09T09:46:10.878326160Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 9 09:46:11.477122 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4161341921.mount: Deactivated successfully. 
Feb 9 09:46:12.185822 env[1817]: time="2024-02-09T09:46:12.185759506Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:12.188419 env[1817]: time="2024-02-09T09:46:12.188373314Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b19406328e70dd2f6a36d6dbe4e867b0684ced2fdeb2f02ecb54ead39ec0bac0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:12.191119 env[1817]: time="2024-02-09T09:46:12.191056372Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:12.193506 env[1817]: time="2024-02-09T09:46:12.193457693Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:12.194695 env[1817]: time="2024-02-09T09:46:12.194650002Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:b19406328e70dd2f6a36d6dbe4e867b0684ced2fdeb2f02ecb54ead39ec0bac0\"" Feb 9 09:46:16.181432 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 9 09:46:16.181801 systemd[1]: Stopped kubelet.service. Feb 9 09:46:16.184545 systemd[1]: Started kubelet.service. Feb 9 09:46:16.300758 kubelet[2391]: E0209 09:46:16.300684 2391 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 09:46:16.305094 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 09:46:16.305474 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 09:46:16.361775 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 9 09:46:21.142369 systemd[1]: Stopped kubelet.service. Feb 9 09:46:21.172495 systemd[1]: Reloading. Feb 9 09:46:21.298519 /usr/lib/systemd/system-generators/torcx-generator[2424]: time="2024-02-09T09:46:21Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:46:21.299183 /usr/lib/systemd/system-generators/torcx-generator[2424]: time="2024-02-09T09:46:21Z" level=info msg="torcx already run" Feb 9 09:46:21.468810 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:46:21.468850 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:46:21.508119 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:46:21.723135 systemd[1]: Started kubelet.service. Feb 9 09:46:21.821172 kubelet[2486]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. 
Feb 9 09:46:21.821833 kubelet[2486]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 09:46:21.822104 kubelet[2486]: I0209 09:46:21.822046 2486 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 09:46:21.824478 kubelet[2486]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 09:46:21.824705 kubelet[2486]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 09:46:23.367142 kubelet[2486]: I0209 09:46:23.367095 2486 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 09:46:23.367142 kubelet[2486]: I0209 09:46:23.367145 2486 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 09:46:23.367863 kubelet[2486]: I0209 09:46:23.367484 2486 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 09:46:23.373268 kubelet[2486]: I0209 09:46:23.373233 2486 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 09:46:23.373807 kubelet[2486]: E0209 09:46:23.373780 2486 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.16.76:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.16.76:6443: connect: connection refused Feb 9 09:46:23.376506 kubelet[2486]: W0209 09:46:23.376476 2486 machine.go:65] Cannot read vendor id correctly, set empty. Feb 9 09:46:23.377790 kubelet[2486]: I0209 09:46:23.377755 2486 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 09:46:23.378515 kubelet[2486]: I0209 09:46:23.378488 2486 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 09:46:23.378655 kubelet[2486]: I0209 09:46:23.378625 2486 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 09:46:23.378801 kubelet[2486]: I0209 09:46:23.378690 2486 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 09:46:23.378801 kubelet[2486]: I0209 09:46:23.378716 2486 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 09:46:23.378941 kubelet[2486]: I0209 09:46:23.378898 2486 state_mem.go:36] "Initialized new in-memory state store" Feb 9 09:46:23.389084 kubelet[2486]: I0209 09:46:23.389038 2486 kubelet.go:398] "Attempting to sync node with API server" Feb 9 09:46:23.389084 kubelet[2486]: I0209 09:46:23.389086 2486 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 09:46:23.389328 kubelet[2486]: I0209 09:46:23.389176 2486 kubelet.go:297] "Adding apiserver pod source" Feb 9 09:46:23.389328 kubelet[2486]: I0209 09:46:23.389200 2486 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 09:46:23.391237 kubelet[2486]: I0209 09:46:23.391187 2486 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 09:46:23.391891 kubelet[2486]: W0209 09:46:23.391839 2486 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 9 09:46:23.392631 kubelet[2486]: I0209 09:46:23.392568 2486 server.go:1186] "Started kubelet" Feb 9 09:46:23.392923 kubelet[2486]: W0209 09:46:23.392849 2486 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.16.76:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-76&limit=500&resourceVersion=0": dial tcp 172.31.16.76:6443: connect: connection refused Feb 9 09:46:23.393004 kubelet[2486]: E0209 09:46:23.392942 2486 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.16.76:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-76&limit=500&resourceVersion=0": dial tcp 172.31.16.76:6443: connect: connection refused Feb 9 09:46:23.393109 kubelet[2486]: W0209 09:46:23.393050 2486 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.16.76:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.16.76:6443: connect: connection refused Feb 9 09:46:23.393187 kubelet[2486]: E0209 09:46:23.393122 2486 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.16.76:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.16.76:6443: connect: connection refused Feb 9 09:46:23.394096 kubelet[2486]: I0209 09:46:23.394046 2486 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 09:46:23.395117 kubelet[2486]: I0209 09:46:23.395070 2486 server.go:451] "Adding debug handlers to kubelet server" Feb 9 09:46:23.399352 kubelet[2486]: E0209 09:46:23.399143 2486 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-16-76.17b228b8ea0b18ae", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-16-76", UID:"ip-172-31-16-76", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-16-76"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 46, 23, 392528558, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 46, 23, 392528558, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://172.31.16.76:6443/api/v1/namespaces/default/events": dial tcp 172.31.16.76:6443: connect: connection refused'(may retry after sleeping) Feb 9 09:46:23.400169 kubelet[2486]: E0209 09:46:23.400137 2486 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 09:46:23.400331 kubelet[2486]: E0209 09:46:23.400310 2486 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 09:46:23.403147 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Feb 9 09:46:23.405096 kubelet[2486]: I0209 09:46:23.403835 2486 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 09:46:23.408163 kubelet[2486]: I0209 09:46:23.408119 2486 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 09:46:23.408508 kubelet[2486]: I0209 09:46:23.408473 2486 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 09:46:23.409268 kubelet[2486]: W0209 09:46:23.409205 2486 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.16.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.76:6443: connect: connection refused Feb 9 09:46:23.409511 kubelet[2486]: E0209 09:46:23.409488 2486 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.16.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.76:6443: connect: connection refused Feb 9 09:46:23.410516 kubelet[2486]: E0209 09:46:23.410472 2486 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://172.31.16.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-76?timeout=10s": dial tcp 172.31.16.76:6443: connect: connection refused Feb 9 09:46:23.511998 kubelet[2486]: I0209 09:46:23.511942 2486 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 09:46:23.511998 kubelet[2486]: I0209 09:46:23.511981 2486 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 09:46:23.512219 kubelet[2486]: I0209 09:46:23.512010 2486 state_mem.go:36] "Initialized new in-memory state store" Feb 9 09:46:23.513851 kubelet[2486]: I0209 09:46:23.513804 2486 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-16-76" Feb 9 09:46:23.514481 kubelet[2486]: I0209 09:46:23.514438 2486 policy_none.go:49] "None policy: Start" Feb 9 09:46:23.515629 kubelet[2486]: I0209 09:46:23.515568 2486 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 09:46:23.515843 kubelet[2486]: E0209 09:46:23.515770 2486 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.16.76:6443/api/v1/nodes\": dial tcp 172.31.16.76:6443: connect: connection refused" node="ip-172-31-16-76" Feb 9 09:46:23.515975 kubelet[2486]: I0209 09:46:23.515955 2486 state_mem.go:35] "Initializing new in-memory state store" Feb 9 09:46:23.534015 kubelet[2486]: I0209 09:46:23.533977 2486 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 09:46:23.534526 kubelet[2486]: I0209 09:46:23.534505 2486 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 09:46:23.541392 kubelet[2486]: E0209 09:46:23.541359 2486 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-16-76\" not found" Feb 9 09:46:23.553455 kubelet[2486]: I0209 09:46:23.553393 2486 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 09:46:23.597033 kubelet[2486]: I0209 09:46:23.596999 2486 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 09:46:23.597231 kubelet[2486]: I0209 09:46:23.597209 2486 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 09:46:23.597369 kubelet[2486]: I0209 09:46:23.597347 2486 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 09:46:23.597531 kubelet[2486]: E0209 09:46:23.597512 2486 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 09:46:23.598468 kubelet[2486]: W0209 09:46:23.598393 2486 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.16.76:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.76:6443: connect: connection refused Feb 9 09:46:23.598774 kubelet[2486]: E0209 09:46:23.598748 2486 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.16.76:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.76:6443: connect: connection refused Feb 9 09:46:23.611571 kubelet[2486]: E0209 09:46:23.611514 2486 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://172.31.16.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-76?timeout=10s": dial tcp 172.31.16.76:6443: connect: connection refused Feb 9 09:46:23.698725 kubelet[2486]: I0209 09:46:23.698667 2486 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:46:23.700889 kubelet[2486]: I0209 09:46:23.700857 2486 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:46:23.706022 kubelet[2486]: I0209 09:46:23.705989 2486 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:46:23.708848 kubelet[2486]: I0209 09:46:23.708813 2486 status_manager.go:698] "Failed to get status for pod" podUID=7a743495506f57fe8ce12d2054e3d7d9 pod="kube-system/kube-apiserver-ip-172-31-16-76" err="Get \"https://172.31.16.76:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ip-172-31-16-76\": dial tcp 172.31.16.76:6443: connect: connection refused" Feb 9 09:46:23.718367 kubelet[2486]: I0209 09:46:23.718316 2486 status_manager.go:698] "Failed to get status for pod" podUID=9a1c51f6558c9b9fb3d526ef30aad8db pod="kube-system/kube-controller-manager-ip-172-31-16-76" err="Get \"https://172.31.16.76:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ip-172-31-16-76\": dial tcp 172.31.16.76:6443: connect: connection refused" Feb 9 09:46:23.722970 kubelet[2486]: I0209 09:46:23.722915 2486 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-16-76" Feb 9 09:46:23.723845 kubelet[2486]: I0209 09:46:23.723805 2486 status_manager.go:698] "Failed to get status for pod" podUID=0800b07609d1f2f4d97d0a1d19ea4611 pod="kube-system/kube-scheduler-ip-172-31-16-76" err="Get \"https://172.31.16.76:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ip-172-31-16-76\": dial tcp 172.31.16.76:6443: connect: connection refused" Feb 9 09:46:23.723985 kubelet[2486]: E0209 09:46:23.723909 2486 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.16.76:6443/api/v1/nodes\": dial tcp 172.31.16.76:6443: connect: connection refused" node="ip-172-31-16-76" Feb 9 09:46:23.809404 kubelet[2486]: I0209 09:46:23.809354 2486 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/7a743495506f57fe8ce12d2054e3d7d9-ca-certs\") pod \"kube-apiserver-ip-172-31-16-76\" (UID: \"7a743495506f57fe8ce12d2054e3d7d9\") " pod="kube-system/kube-apiserver-ip-172-31-16-76" Feb 9 09:46:23.809595 kubelet[2486]: I0209 09:46:23.809437 2486 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9a1c51f6558c9b9fb3d526ef30aad8db-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-76\" (UID: \"9a1c51f6558c9b9fb3d526ef30aad8db\") " pod="kube-system/kube-controller-manager-ip-172-31-16-76" Feb 9 09:46:23.809595 kubelet[2486]: I0209 09:46:23.809487 2486 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9a1c51f6558c9b9fb3d526ef30aad8db-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-76\" (UID: \"9a1c51f6558c9b9fb3d526ef30aad8db\") " pod="kube-system/kube-controller-manager-ip-172-31-16-76" Feb 9 09:46:23.809595 kubelet[2486]: I0209 09:46:23.809533 2486 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9a1c51f6558c9b9fb3d526ef30aad8db-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-76\" (UID: \"9a1c51f6558c9b9fb3d526ef30aad8db\") " pod="kube-system/kube-controller-manager-ip-172-31-16-76" Feb 9 09:46:23.809595 kubelet[2486]: I0209 09:46:23.809582 2486 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9a1c51f6558c9b9fb3d526ef30aad8db-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-76\" (UID: \"9a1c51f6558c9b9fb3d526ef30aad8db\") " pod="kube-system/kube-controller-manager-ip-172-31-16-76" Feb 9 09:46:23.809886 kubelet[2486]: I0209 09:46:23.809650 2486 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0800b07609d1f2f4d97d0a1d19ea4611-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-76\" (UID: \"0800b07609d1f2f4d97d0a1d19ea4611\") " pod="kube-system/kube-scheduler-ip-172-31-16-76" Feb 9 09:46:23.809886 kubelet[2486]: I0209 09:46:23.809695 2486 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7a743495506f57fe8ce12d2054e3d7d9-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-76\" (UID: \"7a743495506f57fe8ce12d2054e3d7d9\") " pod="kube-system/kube-apiserver-ip-172-31-16-76" Feb 9 09:46:23.809886 kubelet[2486]: I0209 09:46:23.809740 2486 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7a743495506f57fe8ce12d2054e3d7d9-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-76\" (UID: \"7a743495506f57fe8ce12d2054e3d7d9\") " pod="kube-system/kube-apiserver-ip-172-31-16-76" Feb 9 09:46:23.809886 kubelet[2486]: I0209 09:46:23.809788 2486 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9a1c51f6558c9b9fb3d526ef30aad8db-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-76\" (UID: \"9a1c51f6558c9b9fb3d526ef30aad8db\") " pod="kube-system/kube-controller-manager-ip-172-31-16-76" Feb 9 09:46:24.013201 kubelet[2486]: E0209 
09:46:24.013066 2486 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://172.31.16.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-76?timeout=10s": dial tcp 172.31.16.76:6443: connect: connection refused Feb 9 09:46:24.016308 env[1817]: time="2024-02-09T09:46:24.016235293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-76,Uid:9a1c51f6558c9b9fb3d526ef30aad8db,Namespace:kube-system,Attempt:0,}" Feb 9 09:46:24.019422 env[1817]: time="2024-02-09T09:46:24.019029033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-76,Uid:7a743495506f57fe8ce12d2054e3d7d9,Namespace:kube-system,Attempt:0,}" Feb 9 09:46:24.023835 env[1817]: time="2024-02-09T09:46:24.023760958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-76,Uid:0800b07609d1f2f4d97d0a1d19ea4611,Namespace:kube-system,Attempt:0,}" Feb 9 09:46:24.125991 kubelet[2486]: I0209 09:46:24.125901 2486 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-16-76" Feb 9 09:46:24.126649 kubelet[2486]: E0209 09:46:24.126592 2486 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.16.76:6443/api/v1/nodes\": dial tcp 172.31.16.76:6443: connect: connection refused" node="ip-172-31-16-76" Feb 9 09:46:24.487206 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1229535072.mount: Deactivated successfully. Feb 9 09:46:24.498015 env[1817]: time="2024-02-09T09:46:24.497959417Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:24.504749 env[1817]: time="2024-02-09T09:46:24.504677551Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:24.506336 env[1817]: time="2024-02-09T09:46:24.506282628Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:24.509352 env[1817]: time="2024-02-09T09:46:24.509286003Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:24.511097 env[1817]: time="2024-02-09T09:46:24.511054742Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:24.513665 env[1817]: time="2024-02-09T09:46:24.513556536Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:24.518093 env[1817]: time="2024-02-09T09:46:24.518043869Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:24.532822 env[1817]: time="2024-02-09T09:46:24.532760579Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 9 09:46:24.537421 env[1817]: time="2024-02-09T09:46:24.537349806Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:24.539802 env[1817]: time="2024-02-09T09:46:24.539743777Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:24.542858 env[1817]: time="2024-02-09T09:46:24.542802318Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:24.545647 kubelet[2486]: W0209 09:46:24.545546 2486 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.16.76:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.76:6443: connect: connection refused Feb 9 09:46:24.545647 kubelet[2486]: E0209 09:46:24.545640 2486 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.16.76:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.76:6443: connect: connection refused Feb 9 09:46:24.559179 env[1817]: time="2024-02-09T09:46:24.559061524Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:24.607745 env[1817]: time="2024-02-09T09:46:24.606764155Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:46:24.607745 env[1817]: time="2024-02-09T09:46:24.606836914Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:46:24.607745 env[1817]: time="2024-02-09T09:46:24.606862366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:46:24.607745 env[1817]: time="2024-02-09T09:46:24.607351815Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8b17a102e25832ec06d73ed4ffaec37097b3c0e541bf29fb8ab138ca86e9266f pid=2576 runtime=io.containerd.runc.v2 Feb 9 09:46:24.610092 env[1817]: time="2024-02-09T09:46:24.609975365Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:46:24.610301 env[1817]: time="2024-02-09T09:46:24.610055864Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:46:24.610546 env[1817]: time="2024-02-09T09:46:24.610455801Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:46:24.611100 env[1817]: time="2024-02-09T09:46:24.611000079Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/57520e5674700b14c1d880529f18d3e68620333591c1b04b000bcd21287fc04f pid=2570 runtime=io.containerd.runc.v2 Feb 9 09:46:24.641188 env[1817]: time="2024-02-09T09:46:24.637885179Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:46:24.641188 env[1817]: time="2024-02-09T09:46:24.637952189Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:46:24.641188 env[1817]: time="2024-02-09T09:46:24.637977438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:46:24.641188 env[1817]: time="2024-02-09T09:46:24.639461875Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/72d057519198fd5424a333ce48557c2c4f755070bb7a6c8c67eadfd28b0eed3d pid=2607 runtime=io.containerd.runc.v2 Feb 9 09:46:24.693536 kubelet[2486]: W0209 09:46:24.693370 2486 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.16.76:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.16.76:6443: connect: connection refused Feb 9 09:46:24.693536 kubelet[2486]: E0209 09:46:24.693487 2486 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.16.76:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.16.76:6443: connect: connection refused Feb 9 09:46:24.784991 env[1817]: time="2024-02-09T09:46:24.784793216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-76,Uid:9a1c51f6558c9b9fb3d526ef30aad8db,Namespace:kube-system,Attempt:0,} returns sandbox id \"57520e5674700b14c1d880529f18d3e68620333591c1b04b000bcd21287fc04f\"" Feb 9 09:46:24.792728 env[1817]: time="2024-02-09T09:46:24.792263343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-76,Uid:0800b07609d1f2f4d97d0a1d19ea4611,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b17a102e25832ec06d73ed4ffaec37097b3c0e541bf29fb8ab138ca86e9266f\"" Feb 9 09:46:24.797702 env[1817]: time="2024-02-09T09:46:24.797649564Z" level=info msg="CreateContainer within sandbox \"57520e5674700b14c1d880529f18d3e68620333591c1b04b000bcd21287fc04f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 9 09:46:24.800147 env[1817]: time="2024-02-09T09:46:24.800046956Z" level=info msg="CreateContainer within sandbox \"8b17a102e25832ec06d73ed4ffaec37097b3c0e541bf29fb8ab138ca86e9266f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 9 09:46:24.801218 kubelet[2486]: W0209 09:46:24.801141 2486 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.16.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.76:6443: connect: connection refused Feb 9 09:46:24.801218 kubelet[2486]: E0209 09:46:24.801226 2486 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
"https://172.31.16.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.76:6443: connect: connection refused Feb 9 09:46:24.814520 kubelet[2486]: E0209 09:46:24.814447 2486 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get "https://172.31.16.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-76?timeout=10s": dial tcp 172.31.16.76:6443: connect: connection refused Feb 9 09:46:24.832759 env[1817]: time="2024-02-09T09:46:24.832698642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-76,Uid:7a743495506f57fe8ce12d2054e3d7d9,Namespace:kube-system,Attempt:0,} returns sandbox id \"72d057519198fd5424a333ce48557c2c4f755070bb7a6c8c67eadfd28b0eed3d\"" Feb 9 09:46:24.840473 env[1817]: time="2024-02-09T09:46:24.840395504Z" level=info msg="CreateContainer within sandbox \"72d057519198fd5424a333ce48557c2c4f755070bb7a6c8c67eadfd28b0eed3d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 9 09:46:24.843413 env[1817]: time="2024-02-09T09:46:24.843343126Z" level=info msg="CreateContainer within sandbox \"57520e5674700b14c1d880529f18d3e68620333591c1b04b000bcd21287fc04f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b120519ce6f48968869c98223cb3aa699a376d5a5d5883edc446e3a8436cf1aa\"" Feb 9 09:46:24.844698 env[1817]: time="2024-02-09T09:46:24.844646537Z" level=info msg="StartContainer for \"b120519ce6f48968869c98223cb3aa699a376d5a5d5883edc446e3a8436cf1aa\"" Feb 9 09:46:24.849377 env[1817]: time="2024-02-09T09:46:24.849282458Z" level=info msg="CreateContainer within sandbox \"8b17a102e25832ec06d73ed4ffaec37097b3c0e541bf29fb8ab138ca86e9266f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3dbeb7516d9290f93c1938676d2430ec1f2280ea10c3921862e1be4c768db9ae\"" Feb 9 09:46:24.850205 env[1817]: time="2024-02-09T09:46:24.850154371Z" level=info msg="StartContainer for \"3dbeb7516d9290f93c1938676d2430ec1f2280ea10c3921862e1be4c768db9ae\"" Feb 9 09:46:24.871406 env[1817]: time="2024-02-09T09:46:24.871325950Z" level=info msg="CreateContainer within sandbox \"72d057519198fd5424a333ce48557c2c4f755070bb7a6c8c67eadfd28b0eed3d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d6d0ec3ec1aeebd436f3faebf2657bfa51b6f0bcadf1c1dad3fb8ac46cb0f375\"" Feb 9 09:46:24.872252 env[1817]: time="2024-02-09T09:46:24.872211963Z" level=info msg="StartContainer for \"d6d0ec3ec1aeebd436f3faebf2657bfa51b6f0bcadf1c1dad3fb8ac46cb0f375\"" Feb 9 09:46:24.906672 kubelet[2486]: W0209 09:46:24.902627 2486 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.16.76:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-76&limit=500&resourceVersion=0": dial tcp 172.31.16.76:6443: connect: connection refused Feb 9 09:46:24.906672 kubelet[2486]: E0209 09:46:24.902723 2486 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.16.76:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-76&limit=500&resourceVersion=0": dial tcp 172.31.16.76:6443: connect: connection refused Feb 9 09:46:24.931000 kubelet[2486]: I0209 09:46:24.930955 2486 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-16-76" Feb 9 09:46:24.931798 kubelet[2486]: E0209 09:46:24.931759 2486 kubelet_node_status.go:92] "Unable to register node with API server" 
err="Post \"https://172.31.16.76:6443/api/v1/nodes\": dial tcp 172.31.16.76:6443: connect: connection refused" node="ip-172-31-16-76" Feb 9 09:46:25.040411 env[1817]: time="2024-02-09T09:46:25.040271663Z" level=info msg="StartContainer for \"d6d0ec3ec1aeebd436f3faebf2657bfa51b6f0bcadf1c1dad3fb8ac46cb0f375\" returns successfully" Feb 9 09:46:25.127519 env[1817]: time="2024-02-09T09:46:25.127455603Z" level=info msg="StartContainer for \"3dbeb7516d9290f93c1938676d2430ec1f2280ea10c3921862e1be4c768db9ae\" returns successfully" Feb 9 09:46:25.168465 env[1817]: time="2024-02-09T09:46:25.168400164Z" level=info msg="StartContainer for \"b120519ce6f48968869c98223cb3aa699a376d5a5d5883edc446e3a8436cf1aa\" returns successfully" Feb 9 09:46:26.533816 kubelet[2486]: I0209 09:46:26.533777 2486 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-16-76" Feb 9 09:46:28.260858 amazon-ssm-agent[1818]: 2024-02-09 09:46:28 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Feb 9 09:46:29.548011 kubelet[2486]: E0209 09:46:29.547973 2486 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-16-76\" not found" node="ip-172-31-16-76" Feb 9 09:46:29.582544 kubelet[2486]: I0209 09:46:29.582502 2486 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-16-76" Feb 9 09:46:30.391974 kubelet[2486]: I0209 09:46:30.391879 2486 apiserver.go:52] "Watching apiserver" Feb 9 09:46:30.409058 kubelet[2486]: I0209 09:46:30.409020 2486 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 09:46:30.454195 kubelet[2486]: I0209 09:46:30.454150 2486 reconciler.go:41] "Reconciler: start to sync state" Feb 9 09:46:30.454518 update_engine[1796]: I0209 09:46:30.454470 1796 update_attempter.cc:509] Updating boot flags... Feb 9 09:46:32.380950 systemd[1]: Reloading. Feb 9 09:46:32.516545 /usr/lib/systemd/system-generators/torcx-generator[3000]: time="2024-02-09T09:46:32Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:46:32.529795 /usr/lib/systemd/system-generators/torcx-generator[3000]: time="2024-02-09T09:46:32Z" level=info msg="torcx already run" Feb 9 09:46:32.710083 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:46:32.710310 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:46:32.749419 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:46:32.985568 systemd[1]: Stopping kubelet.service... Feb 9 09:46:33.006500 systemd[1]: kubelet.service: Deactivated successfully. Feb 9 09:46:33.007239 systemd[1]: Stopped kubelet.service. Feb 9 09:46:33.012892 systemd[1]: Started kubelet.service. 
Feb 9 09:46:33.226583 sudo[3071]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 9 09:46:33.227118 sudo[3071]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 9 09:46:33.231445 kubelet[3060]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 09:46:33.232058 kubelet[3060]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 09:46:33.232482 kubelet[3060]: I0209 09:46:33.232390 3060 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 09:46:33.243545 kubelet[3060]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 09:46:33.248785 kubelet[3060]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 09:46:33.258271 kubelet[3060]: I0209 09:46:33.258233 3060 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 09:46:33.258535 kubelet[3060]: I0209 09:46:33.258513 3060 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 09:46:33.259340 kubelet[3060]: I0209 09:46:33.259307 3060 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 09:46:33.262367 kubelet[3060]: I0209 09:46:33.262306 3060 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 9 09:46:33.264116 kubelet[3060]: I0209 09:46:33.264085 3060 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 09:46:33.268700 kubelet[3060]: W0209 09:46:33.268668 3060 machine.go:65] Cannot read vendor id correctly, set empty. Feb 9 09:46:33.270380 kubelet[3060]: I0209 09:46:33.270340 3060 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 09:46:33.272066 kubelet[3060]: I0209 09:46:33.272020 3060 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 09:46:33.272407 kubelet[3060]: I0209 09:46:33.272364 3060 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 09:46:33.272692 kubelet[3060]: I0209 09:46:33.272670 3060 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 09:46:33.272850 kubelet[3060]: I0209 09:46:33.272829 3060 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 09:46:33.273028 kubelet[3060]: I0209 09:46:33.272996 3060 state_mem.go:36] "Initialized new in-memory state store" Feb 9 09:46:33.279482 kubelet[3060]: I0209 09:46:33.279450 3060 kubelet.go:398] "Attempting to sync node with API server" Feb 9 09:46:33.279712 kubelet[3060]: I0209 09:46:33.279691 3060 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 09:46:33.279870 kubelet[3060]: I0209 09:46:33.279850 3060 kubelet.go:297] "Adding apiserver pod source" Feb 9 09:46:33.280029 kubelet[3060]: I0209 09:46:33.280009 3060 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 09:46:33.287056 kubelet[3060]: I0209 09:46:33.287024 3060 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 09:46:33.288266 kubelet[3060]: I0209 09:46:33.288239 3060 server.go:1186] "Started kubelet" Feb 9 09:46:33.300739 kubelet[3060]: E0209 09:46:33.300704 3060 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 09:46:33.301018 kubelet[3060]: E0209 09:46:33.300998 3060 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 09:46:33.302321 kubelet[3060]: I0209 09:46:33.302276 3060 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 09:46:33.302876 kubelet[3060]: I0209 09:46:33.302824 3060 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 09:46:33.304252 kubelet[3060]: I0209 09:46:33.304226 3060 server.go:451] "Adding debug handlers to kubelet server" Feb 9 09:46:33.342856 kubelet[3060]: I0209 09:46:33.342818 3060 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 09:46:33.343586 kubelet[3060]: I0209 09:46:33.343557 3060 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 09:46:33.448746 kubelet[3060]: I0209 09:46:33.448716 3060 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-16-76" Feb 9 09:46:33.473108 kubelet[3060]: I0209 09:46:33.470512 3060 kubelet_node_status.go:108] "Node was previously registered" node="ip-172-31-16-76" Feb 9 09:46:33.473108 kubelet[3060]: I0209 09:46:33.470663 3060 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-16-76" Feb 9 09:46:33.607332 kubelet[3060]: I0209 09:46:33.607218 3060 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 09:46:33.728880 kubelet[3060]: I0209 09:46:33.728841 3060 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 09:46:33.729141 kubelet[3060]: I0209 09:46:33.729118 3060 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 09:46:33.729271 kubelet[3060]: I0209 09:46:33.729251 3060 state_mem.go:36] "Initialized new in-memory state store" Feb 9 09:46:33.729710 kubelet[3060]: I0209 09:46:33.729687 3060 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 9 09:46:33.729839 kubelet[3060]: I0209 09:46:33.729819 3060 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 9 09:46:33.729961 kubelet[3060]: I0209 09:46:33.729941 3060 policy_none.go:49] "None policy: Start" Feb 9 09:46:33.731671 kubelet[3060]: I0209 09:46:33.731582 3060 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 09:46:33.731865 kubelet[3060]: I0209 09:46:33.731843 3060 state_mem.go:35] "Initializing new in-memory state store" Feb 9 09:46:33.732327 kubelet[3060]: I0209 09:46:33.732304 3060 state_mem.go:75] "Updated machine memory state" Feb 9 09:46:33.737973 kubelet[3060]: I0209 09:46:33.737939 3060 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 09:46:33.742789 kubelet[3060]: I0209 09:46:33.742756 3060 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 09:46:33.742976 kubelet[3060]: I0209 09:46:33.742955 3060 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 09:46:33.743099 kubelet[3060]: I0209 09:46:33.743079 3060 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 09:46:33.743260 kubelet[3060]: E0209 09:46:33.743242 3060 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 09:46:33.744268 kubelet[3060]: I0209 09:46:33.744237 3060 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 09:46:33.844497 kubelet[3060]: I0209 09:46:33.844453 3060 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:46:33.845077 kubelet[3060]: I0209 09:46:33.845010 3060 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:46:33.845440 kubelet[3060]: I0209 09:46:33.845389 3060 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:46:33.868935 kubelet[3060]: E0209 09:46:33.868754 3060 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-16-76\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-16-76" Feb 9 09:46:33.960475 kubelet[3060]: I0209 09:46:33.960380 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7a743495506f57fe8ce12d2054e3d7d9-ca-certs\") pod \"kube-apiserver-ip-172-31-16-76\" (UID: \"7a743495506f57fe8ce12d2054e3d7d9\") " pod="kube-system/kube-apiserver-ip-172-31-16-76" Feb 9 09:46:33.960764 kubelet[3060]: I0209 09:46:33.960730 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9a1c51f6558c9b9fb3d526ef30aad8db-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-76\" (UID: \"9a1c51f6558c9b9fb3d526ef30aad8db\") " pod="kube-system/kube-controller-manager-ip-172-31-16-76" Feb 9 09:46:33.960942 kubelet[3060]: I0209 09:46:33.960922 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9a1c51f6558c9b9fb3d526ef30aad8db-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-76\" (UID: \"9a1c51f6558c9b9fb3d526ef30aad8db\") " pod="kube-system/kube-controller-manager-ip-172-31-16-76" Feb 9 09:46:33.961158 kubelet[3060]: I0209 09:46:33.961139 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0800b07609d1f2f4d97d0a1d19ea4611-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-76\" (UID: \"0800b07609d1f2f4d97d0a1d19ea4611\") " pod="kube-system/kube-scheduler-ip-172-31-16-76" Feb 9 09:46:33.961339 kubelet[3060]: I0209 09:46:33.961321 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7a743495506f57fe8ce12d2054e3d7d9-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-76\" (UID: \"7a743495506f57fe8ce12d2054e3d7d9\") " pod="kube-system/kube-apiserver-ip-172-31-16-76" Feb 9 09:46:33.961527 kubelet[3060]: I0209 09:46:33.961508 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7a743495506f57fe8ce12d2054e3d7d9-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-76\" (UID: \"7a743495506f57fe8ce12d2054e3d7d9\") " 
pod="kube-system/kube-apiserver-ip-172-31-16-76" Feb 9 09:46:33.961757 kubelet[3060]: I0209 09:46:33.961737 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9a1c51f6558c9b9fb3d526ef30aad8db-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-76\" (UID: \"9a1c51f6558c9b9fb3d526ef30aad8db\") " pod="kube-system/kube-controller-manager-ip-172-31-16-76" Feb 9 09:46:33.961961 kubelet[3060]: I0209 09:46:33.961941 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9a1c51f6558c9b9fb3d526ef30aad8db-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-76\" (UID: \"9a1c51f6558c9b9fb3d526ef30aad8db\") " pod="kube-system/kube-controller-manager-ip-172-31-16-76" Feb 9 09:46:33.962150 kubelet[3060]: I0209 09:46:33.962131 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9a1c51f6558c9b9fb3d526ef30aad8db-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-76\" (UID: \"9a1c51f6558c9b9fb3d526ef30aad8db\") " pod="kube-system/kube-controller-manager-ip-172-31-16-76" Feb 9 09:46:34.291705 kubelet[3060]: I0209 09:46:34.291662 3060 apiserver.go:52] "Watching apiserver" Feb 9 09:46:34.323408 sudo[3071]: pam_unix(sudo:session): session closed for user root Feb 9 09:46:34.346277 kubelet[3060]: I0209 09:46:34.344576 3060 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 09:46:34.365003 kubelet[3060]: I0209 09:46:34.364937 3060 reconciler.go:41] "Reconciler: start to sync state" Feb 9 09:46:34.796504 kubelet[3060]: E0209 09:46:34.796462 3060 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-16-76\" already exists" pod="kube-system/kube-scheduler-ip-172-31-16-76" Feb 9 09:46:34.889535 kubelet[3060]: E0209 09:46:34.889500 3060 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-16-76\" already exists" pod="kube-system/kube-apiserver-ip-172-31-16-76" Feb 9 09:46:35.088829 kubelet[3060]: E0209 09:46:35.088705 3060 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-16-76\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-16-76" Feb 9 09:46:35.692486 kubelet[3060]: I0209 09:46:35.692434 3060 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-16-76" podStartSLOduration=2.692305791 pod.CreationTimestamp="2024-02-09 09:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:46:35.307148665 +0000 UTC m=+2.278059737" watchObservedRunningTime="2024-02-09 09:46:35.692305791 +0000 UTC m=+2.663216839" Feb 9 09:46:35.693168 kubelet[3060]: I0209 09:46:35.692648 3060 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-16-76" podStartSLOduration=2.692555096 pod.CreationTimestamp="2024-02-09 09:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:46:35.690896965 +0000 UTC m=+2.661808013" watchObservedRunningTime="2024-02-09 09:46:35.692555096 +0000 UTC m=+2.663466156" Feb 9 09:46:37.971353 
sudo[2062]: pam_unix(sudo:session): session closed for user root Feb 9 09:46:37.995649 sshd[2058]: pam_unix(sshd:session): session closed for user core Feb 9 09:46:38.002448 systemd[1]: sshd@4-172.31.16.76:22-139.178.89.65:34494.service: Deactivated successfully. Feb 9 09:46:38.004040 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 09:46:38.004956 systemd-logind[1794]: Session 5 logged out. Waiting for processes to exit. Feb 9 09:46:38.007594 systemd-logind[1794]: Removed session 5. Feb 9 09:46:38.785507 kubelet[3060]: I0209 09:46:38.785444 3060 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-16-76" podStartSLOduration=6.78534058 pod.CreationTimestamp="2024-02-09 09:46:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:46:36.092972422 +0000 UTC m=+3.063883470" watchObservedRunningTime="2024-02-09 09:46:38.78534058 +0000 UTC m=+5.756251616" Feb 9 09:46:46.491231 kubelet[3060]: I0209 09:46:46.491176 3060 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 9 09:46:46.492999 env[1817]: time="2024-02-09T09:46:46.492921853Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 9 09:46:46.494509 kubelet[3060]: I0209 09:46:46.494477 3060 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 9 09:46:46.888171 kubelet[3060]: I0209 09:46:46.888008 3060 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:46:46.959738 kubelet[3060]: I0209 09:46:46.959686 3060 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:46:47.073973 kubelet[3060]: I0209 09:46:47.073926 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7e61b558-5a77-4234-b4da-5f9f0868cebd-kube-proxy\") pod \"kube-proxy-ws5vh\" (UID: \"7e61b558-5a77-4234-b4da-5f9f0868cebd\") " pod="kube-system/kube-proxy-ws5vh" Feb 9 09:46:47.074320 kubelet[3060]: I0209 09:46:47.074293 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7e61b558-5a77-4234-b4da-5f9f0868cebd-xtables-lock\") pod \"kube-proxy-ws5vh\" (UID: \"7e61b558-5a77-4234-b4da-5f9f0868cebd\") " pod="kube-system/kube-proxy-ws5vh" Feb 9 09:46:47.074576 kubelet[3060]: I0209 09:46:47.074510 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg2hn\" (UniqueName: \"kubernetes.io/projected/7e61b558-5a77-4234-b4da-5f9f0868cebd-kube-api-access-zg2hn\") pod \"kube-proxy-ws5vh\" (UID: \"7e61b558-5a77-4234-b4da-5f9f0868cebd\") " pod="kube-system/kube-proxy-ws5vh" Feb 9 09:46:47.075013 kubelet[3060]: I0209 09:46:47.074928 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7e61b558-5a77-4234-b4da-5f9f0868cebd-lib-modules\") pod \"kube-proxy-ws5vh\" (UID: \"7e61b558-5a77-4234-b4da-5f9f0868cebd\") " pod="kube-system/kube-proxy-ws5vh" Feb 9 09:46:47.088239 kubelet[3060]: I0209 09:46:47.088179 3060 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:46:47.176001 kubelet[3060]: I0209 09:46:47.175822 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/250ab6e2-51be-4971-8f35-d678ee2fcd86-etc-cni-netd\") pod \"cilium-8nf2h\" (UID: \"250ab6e2-51be-4971-8f35-d678ee2fcd86\") " pod="kube-system/cilium-8nf2h" Feb 9 09:46:47.176313 kubelet[3060]: I0209 09:46:47.176288 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/250ab6e2-51be-4971-8f35-d678ee2fcd86-cilium-config-path\") pod \"cilium-8nf2h\" (UID: \"250ab6e2-51be-4971-8f35-d678ee2fcd86\") " pod="kube-system/cilium-8nf2h" Feb 9 09:46:47.176565 kubelet[3060]: I0209 09:46:47.176482 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nblfw\" (UniqueName: \"kubernetes.io/projected/250ab6e2-51be-4971-8f35-d678ee2fcd86-kube-api-access-nblfw\") pod \"cilium-8nf2h\" (UID: \"250ab6e2-51be-4971-8f35-d678ee2fcd86\") " pod="kube-system/cilium-8nf2h" Feb 9 09:46:47.176960 kubelet[3060]: I0209 09:46:47.176903 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/250ab6e2-51be-4971-8f35-d678ee2fcd86-cilium-run\") pod \"cilium-8nf2h\" (UID: \"250ab6e2-51be-4971-8f35-d678ee2fcd86\") " pod="kube-system/cilium-8nf2h" Feb 9 09:46:47.177066 kubelet[3060]: I0209 09:46:47.176972 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/250ab6e2-51be-4971-8f35-d678ee2fcd86-xtables-lock\") pod \"cilium-8nf2h\" (UID: \"250ab6e2-51be-4971-8f35-d678ee2fcd86\") " pod="kube-system/cilium-8nf2h" Feb 9 09:46:47.177066 kubelet[3060]: I0209 09:46:47.177042 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/250ab6e2-51be-4971-8f35-d678ee2fcd86-hostproc\") pod \"cilium-8nf2h\" (UID: \"250ab6e2-51be-4971-8f35-d678ee2fcd86\") " pod="kube-system/cilium-8nf2h" Feb 9 09:46:47.177195 kubelet[3060]: I0209 09:46:47.177086 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/250ab6e2-51be-4971-8f35-d678ee2fcd86-cilium-cgroup\") pod \"cilium-8nf2h\" (UID: \"250ab6e2-51be-4971-8f35-d678ee2fcd86\") " pod="kube-system/cilium-8nf2h" Feb 9 09:46:47.177195 kubelet[3060]: I0209 09:46:47.177129 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/250ab6e2-51be-4971-8f35-d678ee2fcd86-cni-path\") pod \"cilium-8nf2h\" (UID: \"250ab6e2-51be-4971-8f35-d678ee2fcd86\") " pod="kube-system/cilium-8nf2h" Feb 9 09:46:47.177195 kubelet[3060]: I0209 09:46:47.177174 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/250ab6e2-51be-4971-8f35-d678ee2fcd86-host-proc-sys-kernel\") pod \"cilium-8nf2h\" (UID: \"250ab6e2-51be-4971-8f35-d678ee2fcd86\") " pod="kube-system/cilium-8nf2h" Feb 9 09:46:47.177382 kubelet[3060]: I0209 09:46:47.177245 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/250ab6e2-51be-4971-8f35-d678ee2fcd86-clustermesh-secrets\") pod \"cilium-8nf2h\" (UID: \"250ab6e2-51be-4971-8f35-d678ee2fcd86\") 
" pod="kube-system/cilium-8nf2h" Feb 9 09:46:47.177382 kubelet[3060]: I0209 09:46:47.177290 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/250ab6e2-51be-4971-8f35-d678ee2fcd86-hubble-tls\") pod \"cilium-8nf2h\" (UID: \"250ab6e2-51be-4971-8f35-d678ee2fcd86\") " pod="kube-system/cilium-8nf2h" Feb 9 09:46:47.177382 kubelet[3060]: I0209 09:46:47.177335 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/250ab6e2-51be-4971-8f35-d678ee2fcd86-host-proc-sys-net\") pod \"cilium-8nf2h\" (UID: \"250ab6e2-51be-4971-8f35-d678ee2fcd86\") " pod="kube-system/cilium-8nf2h" Feb 9 09:46:47.177566 kubelet[3060]: I0209 09:46:47.177419 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/250ab6e2-51be-4971-8f35-d678ee2fcd86-bpf-maps\") pod \"cilium-8nf2h\" (UID: \"250ab6e2-51be-4971-8f35-d678ee2fcd86\") " pod="kube-system/cilium-8nf2h" Feb 9 09:46:47.177566 kubelet[3060]: I0209 09:46:47.177463 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/250ab6e2-51be-4971-8f35-d678ee2fcd86-lib-modules\") pod \"cilium-8nf2h\" (UID: \"250ab6e2-51be-4971-8f35-d678ee2fcd86\") " pod="kube-system/cilium-8nf2h" Feb 9 09:46:47.278690 kubelet[3060]: I0209 09:46:47.278571 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7xsd\" (UniqueName: \"kubernetes.io/projected/67648f94-8d6b-4f6c-b67a-2a1034407668-kube-api-access-b7xsd\") pod \"cilium-operator-f59cbd8c6-sfv82\" (UID: \"67648f94-8d6b-4f6c-b67a-2a1034407668\") " pod="kube-system/cilium-operator-f59cbd8c6-sfv82" Feb 9 09:46:47.279510 kubelet[3060]: I0209 09:46:47.279209 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/67648f94-8d6b-4f6c-b67a-2a1034407668-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-sfv82\" (UID: \"67648f94-8d6b-4f6c-b67a-2a1034407668\") " pod="kube-system/cilium-operator-f59cbd8c6-sfv82" Feb 9 09:46:47.575501 env[1817]: time="2024-02-09T09:46:47.575422866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ws5vh,Uid:7e61b558-5a77-4234-b4da-5f9f0868cebd,Namespace:kube-system,Attempt:0,}" Feb 9 09:46:47.586026 env[1817]: time="2024-02-09T09:46:47.585923119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8nf2h,Uid:250ab6e2-51be-4971-8f35-d678ee2fcd86,Namespace:kube-system,Attempt:0,}" Feb 9 09:46:47.617049 env[1817]: time="2024-02-09T09:46:47.616917806Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:46:47.617229 env[1817]: time="2024-02-09T09:46:47.617083276Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:46:47.617229 env[1817]: time="2024-02-09T09:46:47.617168849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:46:47.618003 env[1817]: time="2024-02-09T09:46:47.617856913Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bb7e6f85ad753795ba1402224bcb64696c0f4ad9aaa1bff097e717937b84fc96 pid=3166 runtime=io.containerd.runc.v2 Feb 9 09:46:47.624225 env[1817]: time="2024-02-09T09:46:47.624077052Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:46:47.624225 env[1817]: time="2024-02-09T09:46:47.624163465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:46:47.624484 env[1817]: time="2024-02-09T09:46:47.624221750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:46:47.624708 env[1817]: time="2024-02-09T09:46:47.624588762Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/156e8e5781f1b4ca0d9764f30566edc4b1cc764d06f77f5282a973e1a9ee335f pid=3181 runtime=io.containerd.runc.v2 Feb 9 09:46:47.738022 env[1817]: time="2024-02-09T09:46:47.737965473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8nf2h,Uid:250ab6e2-51be-4971-8f35-d678ee2fcd86,Namespace:kube-system,Attempt:0,} returns sandbox id \"156e8e5781f1b4ca0d9764f30566edc4b1cc764d06f77f5282a973e1a9ee335f\"" Feb 9 09:46:47.742142 env[1817]: time="2024-02-09T09:46:47.742087509Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 9 09:46:47.752594 env[1817]: time="2024-02-09T09:46:47.752525408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ws5vh,Uid:7e61b558-5a77-4234-b4da-5f9f0868cebd,Namespace:kube-system,Attempt:0,} returns sandbox id \"bb7e6f85ad753795ba1402224bcb64696c0f4ad9aaa1bff097e717937b84fc96\"" Feb 9 09:46:47.760578 env[1817]: time="2024-02-09T09:46:47.760524124Z" level=info msg="CreateContainer within sandbox \"bb7e6f85ad753795ba1402224bcb64696c0f4ad9aaa1bff097e717937b84fc96\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 09:46:47.804388 env[1817]: time="2024-02-09T09:46:47.804302738Z" level=info msg="CreateContainer within sandbox \"bb7e6f85ad753795ba1402224bcb64696c0f4ad9aaa1bff097e717937b84fc96\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"62d0ba6d51e21d465ce625954216759b16ce6a4099be9f066572eac0de3d4cc5\"" Feb 9 09:46:47.808013 env[1817]: time="2024-02-09T09:46:47.807779489Z" level=info msg="StartContainer for \"62d0ba6d51e21d465ce625954216759b16ce6a4099be9f066572eac0de3d4cc5\"" Feb 9 09:46:47.942653 env[1817]: time="2024-02-09T09:46:47.941858154Z" level=info msg="StartContainer for \"62d0ba6d51e21d465ce625954216759b16ce6a4099be9f066572eac0de3d4cc5\" returns successfully" Feb 9 09:46:48.000384 env[1817]: time="2024-02-09T09:46:48.000306888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-sfv82,Uid:67648f94-8d6b-4f6c-b67a-2a1034407668,Namespace:kube-system,Attempt:0,}" Feb 9 09:46:48.035562 env[1817]: time="2024-02-09T09:46:48.035430821Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:46:48.035562 env[1817]: time="2024-02-09T09:46:48.035495958Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:46:48.035562 env[1817]: time="2024-02-09T09:46:48.035521998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:46:48.036873 env[1817]: time="2024-02-09T09:46:48.036723451Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/564e926dc88dc6b4f72da20ea0b7c48383d337adcbb62290cf70ba218129d56f pid=3306 runtime=io.containerd.runc.v2 Feb 9 09:46:48.164166 env[1817]: time="2024-02-09T09:46:48.164098873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-sfv82,Uid:67648f94-8d6b-4f6c-b67a-2a1034407668,Namespace:kube-system,Attempt:0,} returns sandbox id \"564e926dc88dc6b4f72da20ea0b7c48383d337adcbb62290cf70ba218129d56f\"" Feb 9 09:46:48.838063 kubelet[3060]: I0209 09:46:48.838008 3060 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-ws5vh" podStartSLOduration=2.837924402 pod.CreationTimestamp="2024-02-09 09:46:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:46:48.837400872 +0000 UTC m=+15.808311932" watchObservedRunningTime="2024-02-09 09:46:48.837924402 +0000 UTC m=+15.808835450" Feb 9 09:46:54.945228 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2368798732.mount: Deactivated successfully. Feb 9 09:46:59.017398 env[1817]: time="2024-02-09T09:46:59.017319557Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:59.020931 env[1817]: time="2024-02-09T09:46:59.020846038Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:59.024518 env[1817]: time="2024-02-09T09:46:59.024442791Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:59.026113 env[1817]: time="2024-02-09T09:46:59.026050588Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 9 09:46:59.032360 env[1817]: time="2024-02-09T09:46:59.032293038Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 9 09:46:59.035130 env[1817]: time="2024-02-09T09:46:59.034994428Z" level=info msg="CreateContainer within sandbox \"156e8e5781f1b4ca0d9764f30566edc4b1cc764d06f77f5282a973e1a9ee335f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 09:46:59.057892 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2360798433.mount: Deactivated successfully. 
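The cilium image above is pulled by tag and digest, quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a...; when a digest is present the content is addressed by the digest and the tag is informational. A minimal Python sketch of splitting a reference of this shape (split_image_ref is a hypothetical helper; the full OCI reference grammar — registries with ports, digest-only refs — is deliberately not handled):

# Minimal sketch: split a repo:tag@digest image reference of the shape
# logged above. Real OCI reference parsing has more cases (registry
# ports, missing tags) that this ignores.
def split_image_ref(ref: str):
    base, _, digest = ref.partition("@")   # digest part may be empty
    repo, _, tag = base.rpartition(":")    # tag follows the last ':'
    return {"repository": repo, "tag": tag, "digest": digest}

print(split_image_ref(
    "quay.io/cilium/cilium:v1.12.5"
    "@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5"))
# {'repository': 'quay.io/cilium/cilium', 'tag': 'v1.12.5', 'digest': 'sha256:06ce...'}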
Feb 9 09:46:59.069758 env[1817]: time="2024-02-09T09:46:59.069672727Z" level=info msg="CreateContainer within sandbox \"156e8e5781f1b4ca0d9764f30566edc4b1cc764d06f77f5282a973e1a9ee335f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"de018a57c4fc0e74f2ec4f520c636df9e6dbc0a78d7741c51c2fe33e548faa2d\"" Feb 9 09:46:59.073270 env[1817]: time="2024-02-09T09:46:59.072650383Z" level=info msg="StartContainer for \"de018a57c4fc0e74f2ec4f520c636df9e6dbc0a78d7741c51c2fe33e548faa2d\"" Feb 9 09:46:59.209929 env[1817]: time="2024-02-09T09:46:59.209854664Z" level=info msg="StartContainer for \"de018a57c4fc0e74f2ec4f520c636df9e6dbc0a78d7741c51c2fe33e548faa2d\" returns successfully" Feb 9 09:47:00.050508 systemd[1]: run-containerd-runc-k8s.io-de018a57c4fc0e74f2ec4f520c636df9e6dbc0a78d7741c51c2fe33e548faa2d-runc.z4d1Pp.mount: Deactivated successfully. Feb 9 09:47:00.050865 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de018a57c4fc0e74f2ec4f520c636df9e6dbc0a78d7741c51c2fe33e548faa2d-rootfs.mount: Deactivated successfully. Feb 9 09:47:00.164151 env[1817]: time="2024-02-09T09:47:00.164083696Z" level=info msg="shim disconnected" id=de018a57c4fc0e74f2ec4f520c636df9e6dbc0a78d7741c51c2fe33e548faa2d Feb 9 09:47:00.164962 env[1817]: time="2024-02-09T09:47:00.164917474Z" level=warning msg="cleaning up after shim disconnected" id=de018a57c4fc0e74f2ec4f520c636df9e6dbc0a78d7741c51c2fe33e548faa2d namespace=k8s.io Feb 9 09:47:00.165107 env[1817]: time="2024-02-09T09:47:00.165062915Z" level=info msg="cleaning up dead shim" Feb 9 09:47:00.188452 env[1817]: time="2024-02-09T09:47:00.188378871Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:47:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3474 runtime=io.containerd.runc.v2\n" Feb 9 09:47:00.850401 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1315924575.mount: Deactivated successfully. Feb 9 09:47:00.869440 env[1817]: time="2024-02-09T09:47:00.869312999Z" level=info msg="CreateContainer within sandbox \"156e8e5781f1b4ca0d9764f30566edc4b1cc764d06f77f5282a973e1a9ee335f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 09:47:00.928271 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount618144326.mount: Deactivated successfully. Feb 9 09:47:00.939794 env[1817]: time="2024-02-09T09:47:00.939696973Z" level=info msg="CreateContainer within sandbox \"156e8e5781f1b4ca0d9764f30566edc4b1cc764d06f77f5282a973e1a9ee335f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"09a7add569f1e4db120eca336c9bf103d278050e2cf0543645f8ed608034160f\"" Feb 9 09:47:00.944326 env[1817]: time="2024-02-09T09:47:00.944244613Z" level=info msg="StartContainer for \"09a7add569f1e4db120eca336c9bf103d278050e2cf0543645f8ed608034160f\"" Feb 9 09:47:01.105471 env[1817]: time="2024-02-09T09:47:01.098031183Z" level=info msg="StartContainer for \"09a7add569f1e4db120eca336c9bf103d278050e2cf0543645f8ed608034160f\" returns successfully" Feb 9 09:47:01.112323 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 09:47:01.112954 systemd[1]: Stopped systemd-sysctl.service. Feb 9 09:47:01.113635 systemd[1]: Stopping systemd-sysctl.service... Feb 9 09:47:01.116981 systemd[1]: Starting systemd-sysctl.service... Feb 9 09:47:01.122893 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 09:47:01.163398 systemd[1]: Finished systemd-sysctl.service. 
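The tmpmount units that keep appearing (var-lib-containerd-tmpmounts-containerd\x2dmountNNN.mount) use systemd's unit-name escaping: '/' is written as '-' and a literal '-' as '\x2d'. A rough sketch of decoding such a name back to its path; unescape_unit is a hypothetical helper covering only the escapes visible here (the authoritative tool is systemd-escape --unescape):

import re

# Undo systemd unit-name escaping for the mount units in this log:
# '-' encodes '/', and '\x2d' encodes a literal '-'.
def unescape_unit(name: str) -> str:
    name = name.removesuffix(".mount").replace("-", "/")
    return "/" + re.sub(r"\\x([0-9a-fA-F]{2})",
                        lambda m: chr(int(m.group(1), 16)), name)

print(unescape_unit(r"var-lib-containerd-tmpmounts-containerd\x2dmount2360798433.mount"))
# /var/lib/containerd/tmpmounts/containerd-mount2360798433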
Feb 9 09:47:01.202795 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-09a7add569f1e4db120eca336c9bf103d278050e2cf0543645f8ed608034160f-rootfs.mount: Deactivated successfully. Feb 9 09:47:01.231812 env[1817]: time="2024-02-09T09:47:01.231736669Z" level=info msg="shim disconnected" id=09a7add569f1e4db120eca336c9bf103d278050e2cf0543645f8ed608034160f Feb 9 09:47:01.232471 env[1817]: time="2024-02-09T09:47:01.231822325Z" level=warning msg="cleaning up after shim disconnected" id=09a7add569f1e4db120eca336c9bf103d278050e2cf0543645f8ed608034160f namespace=k8s.io Feb 9 09:47:01.232471 env[1817]: time="2024-02-09T09:47:01.231846338Z" level=info msg="cleaning up dead shim" Feb 9 09:47:01.251445 env[1817]: time="2024-02-09T09:47:01.251376380Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:47:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3543 runtime=io.containerd.runc.v2\n" Feb 9 09:47:01.891393 env[1817]: time="2024-02-09T09:47:01.891221896Z" level=info msg="CreateContainer within sandbox \"156e8e5781f1b4ca0d9764f30566edc4b1cc764d06f77f5282a973e1a9ee335f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 09:47:01.947986 env[1817]: time="2024-02-09T09:47:01.947902688Z" level=info msg="CreateContainer within sandbox \"156e8e5781f1b4ca0d9764f30566edc4b1cc764d06f77f5282a973e1a9ee335f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7d9a9be4d898f1f83b0e0bf2858459a31e10e6ecfa7513a43dc235c5939d4847\"" Feb 9 09:47:01.952338 env[1817]: time="2024-02-09T09:47:01.952267254Z" level=info msg="StartContainer for \"7d9a9be4d898f1f83b0e0bf2858459a31e10e6ecfa7513a43dc235c5939d4847\"" Feb 9 09:47:02.028806 env[1817]: time="2024-02-09T09:47:02.028735306Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:02.032433 env[1817]: time="2024-02-09T09:47:02.032349505Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:02.039243 env[1817]: time="2024-02-09T09:47:02.039162617Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:02.049053 env[1817]: time="2024-02-09T09:47:02.048977947Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 9 09:47:02.053036 env[1817]: time="2024-02-09T09:47:02.052966561Z" level=info msg="CreateContainer within sandbox \"564e926dc88dc6b4f72da20ea0b7c48383d337adcbb62290cf70ba218129d56f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 9 09:47:02.056032 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1459833542.mount: Deactivated successfully. Feb 9 09:47:02.104733 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3026896056.mount: Deactivated successfully. 
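Both cilium images in this boot are pinned by sha256 digest, so a pull is verifiable: hash the fetched bytes and compare against the pinned value. containerd's content store performs this per manifest and layer during PullImage; the sketch below shows only the core comparison (matches_digest is a hypothetical helper):

import hashlib

# Core of the check implied by pulling "...@sha256:<hex>": the fetched
# content must hash to the pinned digest.
def matches_digest(blob: bytes, pinned: str) -> bool:
    algo, _, want = pinned.partition(":")
    return algo == "sha256" and hashlib.sha256(blob).hexdigest() == want

blob = b"example-bytes"
print(matches_digest(blob, "sha256:" + hashlib.sha256(blob).hexdigest()))  # True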
Feb 9 09:47:02.120230 env[1817]: time="2024-02-09T09:47:02.115580765Z" level=info msg="CreateContainer within sandbox \"564e926dc88dc6b4f72da20ea0b7c48383d337adcbb62290cf70ba218129d56f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"6b5f42ae72e3d1ee1db41cdc915bdfc06b568c7fa2a0dcbc63157a8dd89f02d4\"" Feb 9 09:47:02.120851 env[1817]: time="2024-02-09T09:47:02.120777692Z" level=info msg="StartContainer for \"6b5f42ae72e3d1ee1db41cdc915bdfc06b568c7fa2a0dcbc63157a8dd89f02d4\"" Feb 9 09:47:02.132559 env[1817]: time="2024-02-09T09:47:02.132482533Z" level=info msg="StartContainer for \"7d9a9be4d898f1f83b0e0bf2858459a31e10e6ecfa7513a43dc235c5939d4847\" returns successfully" Feb 9 09:47:02.281499 env[1817]: time="2024-02-09T09:47:02.281393132Z" level=info msg="StartContainer for \"6b5f42ae72e3d1ee1db41cdc915bdfc06b568c7fa2a0dcbc63157a8dd89f02d4\" returns successfully" Feb 9 09:47:02.410161 env[1817]: time="2024-02-09T09:47:02.410096358Z" level=info msg="shim disconnected" id=7d9a9be4d898f1f83b0e0bf2858459a31e10e6ecfa7513a43dc235c5939d4847 Feb 9 09:47:02.410560 env[1817]: time="2024-02-09T09:47:02.410498313Z" level=warning msg="cleaning up after shim disconnected" id=7d9a9be4d898f1f83b0e0bf2858459a31e10e6ecfa7513a43dc235c5939d4847 namespace=k8s.io Feb 9 09:47:02.410767 env[1817]: time="2024-02-09T09:47:02.410727635Z" level=info msg="cleaning up dead shim" Feb 9 09:47:02.434299 env[1817]: time="2024-02-09T09:47:02.434243468Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:47:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3638 runtime=io.containerd.runc.v2\n" Feb 9 09:47:02.885093 env[1817]: time="2024-02-09T09:47:02.885034500Z" level=info msg="CreateContainer within sandbox \"156e8e5781f1b4ca0d9764f30566edc4b1cc764d06f77f5282a973e1a9ee335f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 09:47:02.920998 env[1817]: time="2024-02-09T09:47:02.920906622Z" level=info msg="CreateContainer within sandbox \"156e8e5781f1b4ca0d9764f30566edc4b1cc764d06f77f5282a973e1a9ee335f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8d8ee85a3836ca8d1959f5c95eb91d7ee82e8abdbbda02cf79bb4fe2cf170683\"" Feb 9 09:47:02.922521 env[1817]: time="2024-02-09T09:47:02.922453530Z" level=info msg="StartContainer for \"8d8ee85a3836ca8d1959f5c95eb91d7ee82e8abdbbda02cf79bb4fe2cf170683\"" Feb 9 09:47:03.060498 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d9a9be4d898f1f83b0e0bf2858459a31e10e6ecfa7513a43dc235c5939d4847-rootfs.mount: Deactivated successfully. Feb 9 09:47:03.131293 env[1817]: time="2024-02-09T09:47:03.131226033Z" level=info msg="StartContainer for \"8d8ee85a3836ca8d1959f5c95eb91d7ee82e8abdbbda02cf79bb4fe2cf170683\" returns successfully" Feb 9 09:47:03.188095 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d8ee85a3836ca8d1959f5c95eb91d7ee82e8abdbbda02cf79bb4fe2cf170683-rootfs.mount: Deactivated successfully. 
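The cilium pod runs its init containers strictly one at a time — mount-cgroup, apply-sysctl-overwrites, and mount-bpf-fs so far, with clean-cilium-state and cilium-agent below — and that order can be recovered mechanically from the CreateContainer lines. A parsing sketch over lines in exactly this format (creation_order is a hypothetical helper; the regex is tuned to the escaped quoting seen here and nothing more):

import re

# Recover container creation order for one sandbox from containerd
# 'CreateContainer within sandbox \"<id>\" ... ContainerMetadata{Name:X,'
# log lines; duplicates (request + "returns container id") are dropped.
PAT = re.compile(r'CreateContainer within sandbox \\?"(?P<sandbox>[0-9a-f]+)\\?"'
                 r'.*?ContainerMetadata\{Name:(?P<name>[^,]+),')

def creation_order(lines, sandbox_prefix):
    names = []
    for line in lines:
        m = PAT.search(line)
        if m and m.group("sandbox").startswith(sandbox_prefix) \
             and m.group("name") not in names:
            names.append(m.group("name"))
    return names

# creation_order(log_lines, "156e8e57") should yield, for this log:
# ['mount-cgroup', 'apply-sysctl-overwrites', 'mount-bpf-fs',
#  'clean-cilium-state', 'cilium-agent']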
Feb 9 09:47:03.202982 env[1817]: time="2024-02-09T09:47:03.202905539Z" level=info msg="shim disconnected" id=8d8ee85a3836ca8d1959f5c95eb91d7ee82e8abdbbda02cf79bb4fe2cf170683 Feb 9 09:47:03.203365 env[1817]: time="2024-02-09T09:47:03.203304182Z" level=warning msg="cleaning up after shim disconnected" id=8d8ee85a3836ca8d1959f5c95eb91d7ee82e8abdbbda02cf79bb4fe2cf170683 namespace=k8s.io Feb 9 09:47:03.203576 env[1817]: time="2024-02-09T09:47:03.203526915Z" level=info msg="cleaning up dead shim" Feb 9 09:47:03.249431 env[1817]: time="2024-02-09T09:47:03.249358050Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:47:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3691 runtime=io.containerd.runc.v2\n" Feb 9 09:47:03.902554 env[1817]: time="2024-02-09T09:47:03.902471697Z" level=info msg="CreateContainer within sandbox \"156e8e5781f1b4ca0d9764f30566edc4b1cc764d06f77f5282a973e1a9ee335f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 09:47:03.935669 kubelet[3060]: I0209 09:47:03.935169 3060 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-sfv82" podStartSLOduration=-9.22337201891966e+09 pod.CreationTimestamp="2024-02-09 09:46:46 +0000 UTC" firstStartedPulling="2024-02-09 09:46:48.166218768 +0000 UTC m=+15.137129816" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:47:02.987969632 +0000 UTC m=+29.958880692" watchObservedRunningTime="2024-02-09 09:47:03.935115203 +0000 UTC m=+30.906026239" Feb 9 09:47:03.966797 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1298235679.mount: Deactivated successfully. Feb 9 09:47:03.999328 env[1817]: time="2024-02-09T09:47:03.994711975Z" level=info msg="CreateContainer within sandbox \"156e8e5781f1b4ca0d9764f30566edc4b1cc764d06f77f5282a973e1a9ee335f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3b73c5bf8c09b1090eb27e1ae7b01100d72ef6dab20734f7f14f4c6cf5994eb9\"" Feb 9 09:47:04.010263 env[1817]: time="2024-02-09T09:47:04.003893698Z" level=info msg="StartContainer for \"3b73c5bf8c09b1090eb27e1ae7b01100d72ef6dab20734f7f14f4c6cf5994eb9\"" Feb 9 09:47:04.055866 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount552975959.mount: Deactivated successfully. Feb 9 09:47:04.102337 systemd[1]: run-containerd-runc-k8s.io-3b73c5bf8c09b1090eb27e1ae7b01100d72ef6dab20734f7f14f4c6cf5994eb9-runc.TaMZPE.mount: Deactivated successfully. Feb 9 09:47:04.340525 env[1817]: time="2024-02-09T09:47:04.340447047Z" level=info msg="StartContainer for \"3b73c5bf8c09b1090eb27e1ae7b01100d72ef6dab20734f7f14f4c6cf5994eb9\" returns successfully" Feb 9 09:47:04.637105 kubelet[3060]: I0209 09:47:04.636070 3060 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 09:47:04.650651 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
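The operator's podStartSLOduration above, -9.22337201891966e+09 seconds, is not a real duration: -2^63 nanoseconds is about -9.2233720368e9 seconds, i.e. Go's minimum int64-nanosecond time.Duration, and the logged number is that floor plus roughly 18 s of genuinely elapsed time. The likely mechanism — inferred from the zero-valued lastFinishedPulling="0001-01-01 00:00:00" in the same entry, so treat it as an assumption — is a subtraction against the zero time that saturates the Duration. The magnitude checks out:

# Smallest Go time.Duration, expressed in seconds:
print(-(2**63) / 1e9)            # -9223372036.854776
# Logged value: -9223372018.91966 — the floor above plus ~17.9 s of
# real wall-clock time, hence the absurd negative SLO duration.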
Feb 9 09:47:04.689099 kubelet[3060]: I0209 09:47:04.689030 3060 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:47:04.691483 kubelet[3060]: I0209 09:47:04.691420 3060 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:47:04.759300 kubelet[3060]: I0209 09:47:04.759182 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e4a6c137-edb0-42cc-ad2e-18e31091215e-config-volume\") pod \"coredns-787d4945fb-rzl7z\" (UID: \"e4a6c137-edb0-42cc-ad2e-18e31091215e\") " pod="kube-system/coredns-787d4945fb-rzl7z" Feb 9 09:47:04.759499 kubelet[3060]: I0209 09:47:04.759358 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60cba132-cc4a-48b5-ade2-b1713eacb67a-config-volume\") pod \"coredns-787d4945fb-wsqfz\" (UID: \"60cba132-cc4a-48b5-ade2-b1713eacb67a\") " pod="kube-system/coredns-787d4945fb-wsqfz" Feb 9 09:47:04.759499 kubelet[3060]: I0209 09:47:04.759484 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd5gg\" (UniqueName: \"kubernetes.io/projected/e4a6c137-edb0-42cc-ad2e-18e31091215e-kube-api-access-dd5gg\") pod \"coredns-787d4945fb-rzl7z\" (UID: \"e4a6c137-edb0-42cc-ad2e-18e31091215e\") " pod="kube-system/coredns-787d4945fb-rzl7z" Feb 9 09:47:04.759707 kubelet[3060]: I0209 09:47:04.759672 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f24zb\" (UniqueName: \"kubernetes.io/projected/60cba132-cc4a-48b5-ade2-b1713eacb67a-kube-api-access-f24zb\") pod \"coredns-787d4945fb-wsqfz\" (UID: \"60cba132-cc4a-48b5-ade2-b1713eacb67a\") " pod="kube-system/coredns-787d4945fb-wsqfz" Feb 9 09:47:05.005590 env[1817]: time="2024-02-09T09:47:05.005508644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-rzl7z,Uid:e4a6c137-edb0-42cc-ad2e-18e31091215e,Namespace:kube-system,Attempt:0,}" Feb 9 09:47:05.023515 env[1817]: time="2024-02-09T09:47:05.022967712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-wsqfz,Uid:60cba132-cc4a-48b5-ade2-b1713eacb67a,Namespace:kube-system,Attempt:0,}" Feb 9 09:47:05.617651 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Feb 9 09:47:07.494856 systemd-networkd[1594]: cilium_host: Link UP Feb 9 09:47:07.501943 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 9 09:47:07.502035 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 9 09:47:07.502534 (udev-worker)[3809]: Network interface NamePolicy= disabled on kernel command line. Feb 9 09:47:07.508735 systemd-networkd[1594]: cilium_net: Link UP Feb 9 09:47:07.509720 systemd-networkd[1594]: cilium_net: Gained carrier Feb 9 09:47:07.514236 (udev-worker)[3846]: Network interface NamePolicy= disabled on kernel command line. Feb 9 09:47:07.517194 systemd-networkd[1594]: cilium_host: Gained carrier Feb 9 09:47:07.619122 systemd-networkd[1594]: cilium_net: Gained IPv6LL Feb 9 09:47:07.684287 (udev-worker)[3858]: Network interface NamePolicy= disabled on kernel command line. 
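The repeated kernel warning ("Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!") fires as cilium loads its BPF programs; the knob behind it is the kernel.unprivileged_bpf_disabled sysctl. A minimal read of the current setting:

# Read the sysctl behind the "Unprivileged eBPF is enabled" warning:
# 0 = unprivileged bpf() allowed, 1 = disabled until reboot (one-way),
# 2 = disabled but switchable.
with open("/proc/sys/kernel/unprivileged_bpf_disabled") as f:
    print(f.read().strip())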
Feb 9 09:47:07.695709 systemd-networkd[1594]: cilium_vxlan: Link UP Feb 9 09:47:07.695722 systemd-networkd[1594]: cilium_vxlan: Gained carrier Feb 9 09:47:07.739133 systemd-networkd[1594]: cilium_host: Gained IPv6LL Feb 9 09:47:08.217638 kernel: NET: Registered PF_ALG protocol family Feb 9 09:47:09.607412 (udev-worker)[3856]: Network interface NamePolicy= disabled on kernel command line. Feb 9 09:47:09.634273 systemd-networkd[1594]: lxc_health: Link UP Feb 9 09:47:09.639275 kubelet[3060]: I0209 09:47:09.639176 3060 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-8nf2h" podStartSLOduration=-9.223372013215693e+09 pod.CreationTimestamp="2024-02-09 09:46:46 +0000 UTC" firstStartedPulling="2024-02-09 09:46:47.740360233 +0000 UTC m=+14.711271269" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:47:04.946490021 +0000 UTC m=+31.917401069" watchObservedRunningTime="2024-02-09 09:47:09.63908419 +0000 UTC m=+36.609995262" Feb 9 09:47:09.648401 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 09:47:09.647578 systemd-networkd[1594]: lxc_health: Gained carrier Feb 9 09:47:09.732342 systemd-networkd[1594]: cilium_vxlan: Gained IPv6LL Feb 9 09:47:10.166765 systemd-networkd[1594]: lxcc577d1534d7e: Link UP Feb 9 09:47:10.173670 kernel: eth0: renamed from tmpbb5af Feb 9 09:47:10.181069 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcc577d1534d7e: link becomes ready Feb 9 09:47:10.180725 systemd-networkd[1594]: lxcc577d1534d7e: Gained carrier Feb 9 09:47:10.225813 systemd-networkd[1594]: lxc29c6ffee3e92: Link UP Feb 9 09:47:10.233646 kernel: eth0: renamed from tmp21655 Feb 9 09:47:10.250014 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc29c6ffee3e92: link becomes ready Feb 9 09:47:10.249800 systemd-networkd[1594]: lxc29c6ffee3e92: Gained carrier Feb 9 09:47:11.331355 systemd-networkd[1594]: lxc_health: Gained IPv6LL Feb 9 09:47:11.779265 systemd-networkd[1594]: lxc29c6ffee3e92: Gained IPv6LL Feb 9 09:47:11.971345 systemd-networkd[1594]: lxcc577d1534d7e: Gained IPv6LL Feb 9 09:47:19.200832 env[1817]: time="2024-02-09T09:47:19.183341036Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:47:19.200832 env[1817]: time="2024-02-09T09:47:19.183555886Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:47:19.200832 env[1817]: time="2024-02-09T09:47:19.183582238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:47:19.200832 env[1817]: time="2024-02-09T09:47:19.184239770Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bb5afde088a6b04b772a07c88bde31bdab5b2bfa05d22dd7b671b4443c5cbf5a pid=4222 runtime=io.containerd.runc.v2 Feb 9 09:47:19.238355 systemd[1]: run-containerd-runc-k8s.io-bb5afde088a6b04b772a07c88bde31bdab5b2bfa05d22dd7b671b4443c5cbf5a-runc.9Il0GS.mount: Deactivated successfully. Feb 9 09:47:19.279644 env[1817]: time="2024-02-09T09:47:19.279485560Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:47:19.279819 env[1817]: time="2024-02-09T09:47:19.279679685Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:47:19.279819 env[1817]: time="2024-02-09T09:47:19.279765269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:47:19.280714 env[1817]: time="2024-02-09T09:47:19.280477798Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/21655dd7823718db86400c69236b88e1e89dc802a407afec33e69b4fafb183c4 pid=4258 runtime=io.containerd.runc.v2 Feb 9 09:47:19.421875 env[1817]: time="2024-02-09T09:47:19.421777941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-rzl7z,Uid:e4a6c137-edb0-42cc-ad2e-18e31091215e,Namespace:kube-system,Attempt:0,} returns sandbox id \"bb5afde088a6b04b772a07c88bde31bdab5b2bfa05d22dd7b671b4443c5cbf5a\"" Feb 9 09:47:19.430661 env[1817]: time="2024-02-09T09:47:19.429985918Z" level=info msg="CreateContainer within sandbox \"bb5afde088a6b04b772a07c88bde31bdab5b2bfa05d22dd7b671b4443c5cbf5a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 09:47:19.482347 env[1817]: time="2024-02-09T09:47:19.482103752Z" level=info msg="CreateContainer within sandbox \"bb5afde088a6b04b772a07c88bde31bdab5b2bfa05d22dd7b671b4443c5cbf5a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b17add93108b1e731627883e9a9de629807c7893e3a7ef429095e5de66928651\"" Feb 9 09:47:19.485775 env[1817]: time="2024-02-09T09:47:19.485441631Z" level=info msg="StartContainer for \"b17add93108b1e731627883e9a9de629807c7893e3a7ef429095e5de66928651\"" Feb 9 09:47:19.496834 env[1817]: time="2024-02-09T09:47:19.496758743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-wsqfz,Uid:60cba132-cc4a-48b5-ade2-b1713eacb67a,Namespace:kube-system,Attempt:0,} returns sandbox id \"21655dd7823718db86400c69236b88e1e89dc802a407afec33e69b4fafb183c4\"" Feb 9 09:47:19.508743 env[1817]: time="2024-02-09T09:47:19.508118850Z" level=info msg="CreateContainer within sandbox \"21655dd7823718db86400c69236b88e1e89dc802a407afec33e69b4fafb183c4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 09:47:19.540973 env[1817]: time="2024-02-09T09:47:19.540893157Z" level=info msg="CreateContainer within sandbox \"21655dd7823718db86400c69236b88e1e89dc802a407afec33e69b4fafb183c4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dd8651248da1253ffcaf99dc991cec015bf1ce1ddcc39be0e961bdbac14b052e\"" Feb 9 09:47:19.546026 env[1817]: time="2024-02-09T09:47:19.545743694Z" level=info msg="StartContainer for \"dd8651248da1253ffcaf99dc991cec015bf1ce1ddcc39be0e961bdbac14b052e\"" Feb 9 09:47:19.763063 env[1817]: time="2024-02-09T09:47:19.762906872Z" level=info msg="StartContainer for \"dd8651248da1253ffcaf99dc991cec015bf1ce1ddcc39be0e961bdbac14b052e\" returns successfully" Feb 9 09:47:19.787637 env[1817]: time="2024-02-09T09:47:19.787541098Z" level=info msg="StartContainer for \"b17add93108b1e731627883e9a9de629807c7893e3a7ef429095e5de66928651\" returns successfully" Feb 9 09:47:19.992775 kubelet[3060]: I0209 09:47:19.992727 3060 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-wsqfz" podStartSLOduration=33.992670533 pod.CreationTimestamp="2024-02-09 09:46:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:47:19.97734535 +0000 UTC m=+46.948256422" watchObservedRunningTime="2024-02-09 
09:47:19.992670533 +0000 UTC m=+46.963581617" Feb 9 09:47:20.988463 kubelet[3060]: I0209 09:47:20.988390 3060 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-rzl7z" podStartSLOduration=34.98833476 pod.CreationTimestamp="2024-02-09 09:46:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:47:20.015050474 +0000 UTC m=+46.985961534" watchObservedRunningTime="2024-02-09 09:47:20.98833476 +0000 UTC m=+47.959245808" Feb 9 09:47:37.368429 systemd[1]: Started sshd@5-172.31.16.76:22-139.178.89.65:33260.service. Feb 9 09:47:37.552298 sshd[4461]: Accepted publickey for core from 139.178.89.65 port 33260 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:47:37.555162 sshd[4461]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:47:37.565242 systemd-logind[1794]: New session 6 of user core. Feb 9 09:47:37.566390 systemd[1]: Started session-6.scope. Feb 9 09:47:37.882852 sshd[4461]: pam_unix(sshd:session): session closed for user core Feb 9 09:47:37.888595 systemd-logind[1794]: Session 6 logged out. Waiting for processes to exit. Feb 9 09:47:37.889359 systemd[1]: sshd@5-172.31.16.76:22-139.178.89.65:33260.service: Deactivated successfully. Feb 9 09:47:37.891082 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 09:47:37.895036 systemd-logind[1794]: Removed session 6. Feb 9 09:47:42.909696 systemd[1]: Started sshd@6-172.31.16.76:22-139.178.89.65:49860.service. Feb 9 09:47:43.081831 sshd[4475]: Accepted publickey for core from 139.178.89.65 port 49860 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:47:43.087797 sshd[4475]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:47:43.096286 systemd-logind[1794]: New session 7 of user core. Feb 9 09:47:43.097414 systemd[1]: Started session-7.scope. Feb 9 09:47:43.348031 sshd[4475]: pam_unix(sshd:session): session closed for user core Feb 9 09:47:43.353454 systemd-logind[1794]: Session 7 logged out. Waiting for processes to exit. Feb 9 09:47:43.354494 systemd[1]: sshd@6-172.31.16.76:22-139.178.89.65:49860.service: Deactivated successfully. Feb 9 09:47:43.357042 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 09:47:43.360296 systemd-logind[1794]: Removed session 7. Feb 9 09:47:48.373717 systemd[1]: Started sshd@7-172.31.16.76:22-139.178.89.65:57110.service. Feb 9 09:47:48.549801 sshd[4491]: Accepted publickey for core from 139.178.89.65 port 57110 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:47:48.552203 sshd[4491]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:47:48.561575 systemd[1]: Started session-8.scope. Feb 9 09:47:48.561835 systemd-logind[1794]: New session 8 of user core. Feb 9 09:47:48.806272 sshd[4491]: pam_unix(sshd:session): session closed for user core Feb 9 09:47:48.811153 systemd-logind[1794]: Session 8 logged out. Waiting for processes to exit. Feb 9 09:47:48.811773 systemd[1]: sshd@7-172.31.16.76:22-139.178.89.65:57110.service: Deactivated successfully. Feb 9 09:47:48.813944 systemd[1]: session-8.scope: Deactivated successfully. Feb 9 09:47:48.818727 systemd-logind[1794]: Removed session 8. Feb 9 09:47:53.829203 systemd[1]: Started sshd@8-172.31.16.76:22-139.178.89.65:57116.service. 
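The coredns startup numbers are internally consistent: in these entries podStartSLOduration equals watchObservedRunningTime minus pod.CreationTimestamp exactly (e.g. 09:47:20.98833476 − 09:46:46 = 34.98833476 s for coredns-787d4945fb-rzl7z). A quick check of that arithmetic (Python's datetime carries only microseconds, so the last two digits are truncated):

from datetime import datetime, timezone

created = datetime(2024, 2, 9, 9, 46, 46, tzinfo=timezone.utc)
watched = datetime(2024, 2, 9, 9, 47, 20, 988334, tzinfo=timezone.utc)
print((watched - created).total_seconds())   # 34.988334 ≈ 34.98833476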
Feb 9 09:47:53.998576 sshd[4505]: Accepted publickey for core from 139.178.89.65 port 57116 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:47:54.002034 sshd[4505]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:47:54.010825 systemd-logind[1794]: New session 9 of user core. Feb 9 09:47:54.011583 systemd[1]: Started session-9.scope. Feb 9 09:47:54.264561 sshd[4505]: pam_unix(sshd:session): session closed for user core Feb 9 09:47:54.272375 systemd[1]: sshd@8-172.31.16.76:22-139.178.89.65:57116.service: Deactivated successfully. Feb 9 09:47:54.274882 systemd[1]: session-9.scope: Deactivated successfully. Feb 9 09:47:54.275148 systemd-logind[1794]: Session 9 logged out. Waiting for processes to exit. Feb 9 09:47:54.278354 systemd-logind[1794]: Removed session 9. Feb 9 09:47:59.291464 systemd[1]: Started sshd@9-172.31.16.76:22-139.178.89.65:38932.service. Feb 9 09:47:59.468167 sshd[4518]: Accepted publickey for core from 139.178.89.65 port 38932 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:47:59.470971 sshd[4518]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:47:59.481791 systemd[1]: Started session-10.scope. Feb 9 09:47:59.484003 systemd-logind[1794]: New session 10 of user core. Feb 9 09:47:59.748826 sshd[4518]: pam_unix(sshd:session): session closed for user core Feb 9 09:47:59.754280 systemd-logind[1794]: Session 10 logged out. Waiting for processes to exit. Feb 9 09:47:59.754758 systemd[1]: sshd@9-172.31.16.76:22-139.178.89.65:38932.service: Deactivated successfully. Feb 9 09:47:59.757770 systemd[1]: session-10.scope: Deactivated successfully. Feb 9 09:47:59.760641 systemd-logind[1794]: Removed session 10. Feb 9 09:48:04.774162 systemd[1]: Started sshd@10-172.31.16.76:22-139.178.89.65:38944.service. Feb 9 09:48:04.960363 sshd[4532]: Accepted publickey for core from 139.178.89.65 port 38944 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:48:04.963162 sshd[4532]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:48:04.973474 systemd-logind[1794]: New session 11 of user core. Feb 9 09:48:04.974687 systemd[1]: Started session-11.scope. Feb 9 09:48:05.228038 sshd[4532]: pam_unix(sshd:session): session closed for user core Feb 9 09:48:05.234509 systemd[1]: sshd@10-172.31.16.76:22-139.178.89.65:38944.service: Deactivated successfully. Feb 9 09:48:05.236685 systemd-logind[1794]: Session 11 logged out. Waiting for processes to exit. Feb 9 09:48:05.238450 systemd[1]: session-11.scope: Deactivated successfully. Feb 9 09:48:05.242020 systemd-logind[1794]: Removed session 11. Feb 9 09:48:05.253296 systemd[1]: Started sshd@11-172.31.16.76:22-139.178.89.65:38946.service. Feb 9 09:48:05.427168 sshd[4546]: Accepted publickey for core from 139.178.89.65 port 38946 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:48:05.429827 sshd[4546]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:48:05.439401 systemd-logind[1794]: New session 12 of user core. Feb 9 09:48:05.439547 systemd[1]: Started session-12.scope. Feb 9 09:48:07.257111 sshd[4546]: pam_unix(sshd:session): session closed for user core Feb 9 09:48:07.264843 systemd[1]: sshd@11-172.31.16.76:22-139.178.89.65:38946.service: Deactivated successfully. Feb 9 09:48:07.266722 systemd-logind[1794]: Session 12 logged out. Waiting for processes to exit. Feb 9 09:48:07.267867 systemd[1]: session-12.scope: Deactivated successfully. 
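From here on the log is one automated client cycling short SSH sessions (6, 7, 8, ..., each opened and closed within a second or so). Pairing the pam_unix open/close lines by sshd PID measures each session; a sketch assuming journalctl-style one-entry-per-line input and same-day timestamps (the syslog format carries no year, so strptime defaults to 1900, which cancels in the subtraction):

import re
from datetime import datetime

PAT = re.compile(r"(\w+ \d+ [\d:.]+) sshd\[(\d+)\]: "
                 r"pam_unix\(sshd:session\): session (opened|closed)")

def session_durations(lines):
    opened, spans = {}, []
    for line in lines:
        m = PAT.search(line)
        if not m:
            continue
        ts = datetime.strptime(m.group(1), "%b %d %H:%M:%S.%f")
        pid, event = m.group(2), m.group(3)
        if event == "opened":
            opened[pid] = ts
        elif pid in opened:
            spans.append((pid, (ts - opened.pop(pid)).total_seconds()))
    return spans

# For session 6 above (sshd[4461]): 09:47:37.882852 - 09:47:37.555162
# ≈ 0.33 s — consistent with a scripted check-in, not an interactive user.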
Feb 9 09:48:07.270695 systemd-logind[1794]: Removed session 12. Feb 9 09:48:07.285717 systemd[1]: Started sshd@12-172.31.16.76:22-139.178.89.65:38962.service. Feb 9 09:48:07.485227 sshd[4557]: Accepted publickey for core from 139.178.89.65 port 38962 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:48:07.487976 sshd[4557]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:48:07.498796 systemd-logind[1794]: New session 13 of user core. Feb 9 09:48:07.499806 systemd[1]: Started session-13.scope. Feb 9 09:48:07.755959 sshd[4557]: pam_unix(sshd:session): session closed for user core Feb 9 09:48:07.762807 systemd-logind[1794]: Session 13 logged out. Waiting for processes to exit. Feb 9 09:48:07.764949 systemd[1]: sshd@12-172.31.16.76:22-139.178.89.65:38962.service: Deactivated successfully. Feb 9 09:48:07.767735 systemd[1]: session-13.scope: Deactivated successfully. Feb 9 09:48:07.771312 systemd-logind[1794]: Removed session 13. Feb 9 09:48:12.782499 systemd[1]: Started sshd@13-172.31.16.76:22-139.178.89.65:55736.service. Feb 9 09:48:12.962149 sshd[4571]: Accepted publickey for core from 139.178.89.65 port 55736 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:48:12.965541 sshd[4571]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:48:12.975022 systemd-logind[1794]: New session 14 of user core. Feb 9 09:48:12.976007 systemd[1]: Started session-14.scope. Feb 9 09:48:13.225993 sshd[4571]: pam_unix(sshd:session): session closed for user core Feb 9 09:48:13.231223 systemd[1]: sshd@13-172.31.16.76:22-139.178.89.65:55736.service: Deactivated successfully. Feb 9 09:48:13.234465 systemd[1]: session-14.scope: Deactivated successfully. Feb 9 09:48:13.235105 systemd-logind[1794]: Session 14 logged out. Waiting for processes to exit. Feb 9 09:48:13.237721 systemd-logind[1794]: Removed session 14. Feb 9 09:48:18.253157 systemd[1]: Started sshd@14-172.31.16.76:22-139.178.89.65:36552.service. Feb 9 09:48:18.432637 sshd[4586]: Accepted publickey for core from 139.178.89.65 port 36552 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:48:18.435837 sshd[4586]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:48:18.445190 systemd[1]: Started session-15.scope. Feb 9 09:48:18.445877 systemd-logind[1794]: New session 15 of user core. Feb 9 09:48:18.695121 sshd[4586]: pam_unix(sshd:session): session closed for user core Feb 9 09:48:18.700055 systemd-logind[1794]: Session 15 logged out. Waiting for processes to exit. Feb 9 09:48:18.700694 systemd[1]: sshd@14-172.31.16.76:22-139.178.89.65:36552.service: Deactivated successfully. Feb 9 09:48:18.702490 systemd[1]: session-15.scope: Deactivated successfully. Feb 9 09:48:18.704594 systemd-logind[1794]: Removed session 15. Feb 9 09:48:23.720784 systemd[1]: Started sshd@15-172.31.16.76:22-139.178.89.65:36554.service. Feb 9 09:48:23.894894 sshd[4599]: Accepted publickey for core from 139.178.89.65 port 36554 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:48:23.898285 sshd[4599]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:48:23.908010 systemd[1]: Started session-16.scope. Feb 9 09:48:23.908809 systemd-logind[1794]: New session 16 of user core. Feb 9 09:48:24.158036 sshd[4599]: pam_unix(sshd:session): session closed for user core Feb 9 09:48:24.163886 systemd[1]: sshd@15-172.31.16.76:22-139.178.89.65:36554.service: Deactivated successfully. 
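
The sessions in this stretch open and close within a fraction of a second, with a new connection from the same peer roughly every five seconds, which looks more like an automated client than an interactive user. A throwaway sketch for pairing the pam_unix open/close lines and printing each session's length (it assumes one journal entry per line, as journalctl prints them, and assumes the year, which syslog timestamps omit):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
        "time"
    )

    var re = regexp.MustCompile(
        `^(\w{3} +\d+ \d{2}:\d{2}:\d{2}\.\d{6}) sshd\[(\d+)\]: ` +
            `pam_unix\(sshd:session\): session (opened|closed)`)

    func main() {
        opened := map[string]time.Time{} // sshd PID -> open time
        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            m := re.FindStringSubmatch(sc.Text())
            if m == nil {
                continue
            }
            t, err := time.Parse("Jan 2 15:04:05.000000 2006", m[1]+" 2024")
            if err != nil {
                continue
            }
            if m[3] == "opened" {
                opened[m[2]] = t
            } else if start, ok := opened[m[2]]; ok {
                fmt.Printf("sshd[%s]: session lasted %s\n", m[2], t.Sub(start))
                delete(opened, m[2])
            }
        }
    }
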
Feb 9 09:48:24.166300 systemd-logind[1794]: Session 16 logged out. Waiting for processes to exit. Feb 9 09:48:24.168221 systemd[1]: session-16.scope: Deactivated successfully. Feb 9 09:48:24.169990 systemd-logind[1794]: Removed session 16. Feb 9 09:48:24.186230 systemd[1]: Started sshd@16-172.31.16.76:22-139.178.89.65:36556.service. Feb 9 09:48:24.361937 sshd[4612]: Accepted publickey for core from 139.178.89.65 port 36556 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:48:24.364458 sshd[4612]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:48:24.374233 systemd[1]: Started session-17.scope. Feb 9 09:48:24.374880 systemd-logind[1794]: New session 17 of user core. Feb 9 09:48:24.672998 sshd[4612]: pam_unix(sshd:session): session closed for user core Feb 9 09:48:24.678734 systemd-logind[1794]: Session 17 logged out. Waiting for processes to exit. Feb 9 09:48:24.679761 systemd[1]: sshd@16-172.31.16.76:22-139.178.89.65:36556.service: Deactivated successfully. Feb 9 09:48:24.682586 systemd[1]: session-17.scope: Deactivated successfully. Feb 9 09:48:24.684236 systemd-logind[1794]: Removed session 17. Feb 9 09:48:24.700818 systemd[1]: Started sshd@17-172.31.16.76:22-139.178.89.65:36566.service. Feb 9 09:48:24.878559 sshd[4622]: Accepted publickey for core from 139.178.89.65 port 36566 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:48:24.882067 sshd[4622]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:48:24.891953 systemd[1]: Started session-18.scope. Feb 9 09:48:24.893832 systemd-logind[1794]: New session 18 of user core. Feb 9 09:48:26.310991 sshd[4622]: pam_unix(sshd:session): session closed for user core Feb 9 09:48:26.317444 systemd-logind[1794]: Session 18 logged out. Waiting for processes to exit. Feb 9 09:48:26.319136 systemd[1]: sshd@17-172.31.16.76:22-139.178.89.65:36566.service: Deactivated successfully. Feb 9 09:48:26.320739 systemd[1]: session-18.scope: Deactivated successfully. Feb 9 09:48:26.324270 systemd-logind[1794]: Removed session 18. Feb 9 09:48:26.338144 systemd[1]: Started sshd@18-172.31.16.76:22-139.178.89.65:36572.service. Feb 9 09:48:26.528947 sshd[4644]: Accepted publickey for core from 139.178.89.65 port 36572 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:48:26.530833 sshd[4644]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:48:26.540706 systemd-logind[1794]: New session 19 of user core. Feb 9 09:48:26.542299 systemd[1]: Started session-19.scope. Feb 9 09:48:27.014109 sshd[4644]: pam_unix(sshd:session): session closed for user core Feb 9 09:48:27.019234 systemd[1]: sshd@18-172.31.16.76:22-139.178.89.65:36572.service: Deactivated successfully. Feb 9 09:48:27.021737 systemd[1]: session-19.scope: Deactivated successfully. Feb 9 09:48:27.021763 systemd-logind[1794]: Session 19 logged out. Waiting for processes to exit. Feb 9 09:48:27.024375 systemd-logind[1794]: Removed session 19. Feb 9 09:48:27.038164 systemd[1]: Started sshd@19-172.31.16.76:22-139.178.89.65:36574.service. Feb 9 09:48:27.212988 sshd[4700]: Accepted publickey for core from 139.178.89.65 port 36574 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:48:27.216266 sshd[4700]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:48:27.226456 systemd-logind[1794]: New session 20 of user core. Feb 9 09:48:27.226733 systemd[1]: Started session-20.scope. 
Feb 9 09:48:27.490496 sshd[4700]: pam_unix(sshd:session): session closed for user core Feb 9 09:48:27.495644 systemd-logind[1794]: Session 20 logged out. Waiting for processes to exit. Feb 9 09:48:27.497364 systemd[1]: sshd@19-172.31.16.76:22-139.178.89.65:36574.service: Deactivated successfully. Feb 9 09:48:27.499925 systemd[1]: session-20.scope: Deactivated successfully. Feb 9 09:48:27.502804 systemd-logind[1794]: Removed session 20. Feb 9 09:48:32.516772 systemd[1]: Started sshd@20-172.31.16.76:22-139.178.89.65:54160.service. Feb 9 09:48:32.691277 sshd[4713]: Accepted publickey for core from 139.178.89.65 port 54160 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:48:32.694718 sshd[4713]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:48:32.701704 systemd-logind[1794]: New session 21 of user core. Feb 9 09:48:32.703775 systemd[1]: Started session-21.scope. Feb 9 09:48:32.939521 sshd[4713]: pam_unix(sshd:session): session closed for user core Feb 9 09:48:32.945573 systemd[1]: sshd@20-172.31.16.76:22-139.178.89.65:54160.service: Deactivated successfully. Feb 9 09:48:32.947783 systemd-logind[1794]: Session 21 logged out. Waiting for processes to exit. Feb 9 09:48:32.948847 systemd[1]: session-21.scope: Deactivated successfully. Feb 9 09:48:32.951491 systemd-logind[1794]: Removed session 21. Feb 9 09:48:37.967183 systemd[1]: Started sshd@21-172.31.16.76:22-139.178.89.65:54172.service. Feb 9 09:48:38.143745 sshd[4755]: Accepted publickey for core from 139.178.89.65 port 54172 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:48:38.145965 sshd[4755]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:48:38.154297 systemd-logind[1794]: New session 22 of user core. Feb 9 09:48:38.155278 systemd[1]: Started session-22.scope. Feb 9 09:48:38.399842 sshd[4755]: pam_unix(sshd:session): session closed for user core Feb 9 09:48:38.405051 systemd-logind[1794]: Session 22 logged out. Waiting for processes to exit. Feb 9 09:48:38.405681 systemd[1]: sshd@21-172.31.16.76:22-139.178.89.65:54172.service: Deactivated successfully. Feb 9 09:48:38.407266 systemd[1]: session-22.scope: Deactivated successfully. Feb 9 09:48:38.409359 systemd-logind[1794]: Removed session 22. Feb 9 09:48:43.425129 systemd[1]: Started sshd@22-172.31.16.76:22-139.178.89.65:37830.service. Feb 9 09:48:43.595032 sshd[4768]: Accepted publickey for core from 139.178.89.65 port 37830 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:48:43.598320 sshd[4768]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:48:43.605718 systemd-logind[1794]: New session 23 of user core. Feb 9 09:48:43.607229 systemd[1]: Started session-23.scope. Feb 9 09:48:43.860502 sshd[4768]: pam_unix(sshd:session): session closed for user core Feb 9 09:48:43.868271 systemd[1]: sshd@22-172.31.16.76:22-139.178.89.65:37830.service: Deactivated successfully. Feb 9 09:48:43.870545 systemd[1]: session-23.scope: Deactivated successfully. Feb 9 09:48:43.870938 systemd-logind[1794]: Session 23 logged out. Waiting for processes to exit. Feb 9 09:48:43.877265 systemd-logind[1794]: Removed session 23. Feb 9 09:48:48.887169 systemd[1]: Started sshd@23-172.31.16.76:22-139.178.89.65:60352.service. 
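
Further down, systemd reports mount units with names like run-containerd-runc-k8s.io-…-runc.pJSqJz.mount and var-lib-kubelet-pods-…\x2d…-volumes-….mount. Those names are the mount paths run through systemd's escaping: "/" becomes "-", and bytes outside a small safe set become \xNN. A hypothetical re-implementation of that escaping, enough to decode the unit names below (not systemd's actual code):

    package main

    import (
        "fmt"
        "strings"
    )

    func systemdEscapePath(p string) string {
        p = strings.Trim(p, "/")
        var b strings.Builder
        for i := 0; i < len(p); i++ {
            c := p[i]
            switch {
            case c == '/':
                b.WriteByte('-')
            case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
                c >= '0' && c <= '9', c == ':', c == '_',
                c == '.' && i > 0:
                b.WriteByte(c)
            default:
                fmt.Fprintf(&b, `\x%02x`, c) // e.g. "-" -> \x2d, "~" -> \x7e
            }
        }
        return b.String()
    }

    func main() {
        fmt.Println(systemdEscapePath("/var/lib/kubelet/pods/67648f94-8d6b/volumes"))
        // -> var-lib-kubelet-pods-67648f94\x2d8d6b-volumes
    }
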
Feb 9 09:48:49.056138 sshd[4783]: Accepted publickey for core from 139.178.89.65 port 60352 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:48:49.058793 sshd[4783]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:48:49.067955 systemd-logind[1794]: New session 24 of user core. Feb 9 09:48:49.068196 systemd[1]: Started session-24.scope. Feb 9 09:48:49.315038 sshd[4783]: pam_unix(sshd:session): session closed for user core Feb 9 09:48:49.320155 systemd-logind[1794]: Session 24 logged out. Waiting for processes to exit. Feb 9 09:48:49.320801 systemd[1]: sshd@23-172.31.16.76:22-139.178.89.65:60352.service: Deactivated successfully. Feb 9 09:48:49.322829 systemd[1]: session-24.scope: Deactivated successfully. Feb 9 09:48:49.324110 systemd-logind[1794]: Removed session 24. Feb 9 09:48:49.341944 systemd[1]: Started sshd@24-172.31.16.76:22-139.178.89.65:60362.service. Feb 9 09:48:49.513909 sshd[4796]: Accepted publickey for core from 139.178.89.65 port 60362 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:48:49.516465 sshd[4796]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:48:49.525975 systemd[1]: Started session-25.scope. Feb 9 09:48:49.525977 systemd-logind[1794]: New session 25 of user core. Feb 9 09:48:52.150277 env[1817]: time="2024-02-09T09:48:52.150206694Z" level=info msg="StopContainer for \"6b5f42ae72e3d1ee1db41cdc915bdfc06b568c7fa2a0dcbc63157a8dd89f02d4\" with timeout 30 (s)" Feb 9 09:48:52.154313 env[1817]: time="2024-02-09T09:48:52.154242714Z" level=info msg="Stop container \"6b5f42ae72e3d1ee1db41cdc915bdfc06b568c7fa2a0dcbc63157a8dd89f02d4\" with signal terminated" Feb 9 09:48:52.192547 systemd[1]: run-containerd-runc-k8s.io-3b73c5bf8c09b1090eb27e1ae7b01100d72ef6dab20734f7f14f4c6cf5994eb9-runc.pJSqJz.mount: Deactivated successfully. Feb 9 09:48:52.222028 env[1817]: time="2024-02-09T09:48:52.221467280Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 09:48:52.232120 env[1817]: time="2024-02-09T09:48:52.232044441Z" level=info msg="StopContainer for \"3b73c5bf8c09b1090eb27e1ae7b01100d72ef6dab20734f7f14f4c6cf5994eb9\" with timeout 1 (s)" Feb 9 09:48:52.234144 env[1817]: time="2024-02-09T09:48:52.234073521Z" level=info msg="Stop container \"3b73c5bf8c09b1090eb27e1ae7b01100d72ef6dab20734f7f14f4c6cf5994eb9\" with signal terminated" Feb 9 09:48:52.256232 systemd-networkd[1594]: lxc_health: Link DOWN Feb 9 09:48:52.256252 systemd-networkd[1594]: lxc_health: Lost carrier Feb 9 09:48:52.278533 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6b5f42ae72e3d1ee1db41cdc915bdfc06b568c7fa2a0dcbc63157a8dd89f02d4-rootfs.mount: Deactivated successfully. 
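
The two StopContainer entries above ask containerd to signal each container and wait out a grace period before escalating: 30 s for the operator container (6b5f42ae…), 1 s for the agent (3b73c5bf…), whose lxc_health interface drops as it shuts down. A rough sketch of that stop sequence against the containerd Go client directly (the CRI plugin that actually logged these lines does more bookkeeping; the socket path and k8s.io namespace are the conventional ones):

    package main

    import (
        "context"
        "log"
        "syscall"
        "time"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // Kubernetes-managed containers live in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        c, err := client.LoadContainer(ctx,
            "6b5f42ae72e3d1ee1db41cdc915bdfc06b568c7fa2a0dcbc63157a8dd89f02d4")
        if err != nil {
            log.Fatal(err)
        }
        task, err := c.Task(ctx, nil)
        if err != nil {
            log.Fatal(err)
        }
        statusC, err := task.Wait(ctx) // subscribe before signalling
        if err != nil {
            log.Fatal(err)
        }

        // SIGTERM first; SIGKILL if the grace period (30 s here) expires.
        if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
            log.Fatal(err)
        }
        select {
        case <-statusC:
        case <-time.After(30 * time.Second):
            _ = task.Kill(ctx, syscall.SIGKILL)
            <-statusC
        }
    }
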
Feb 9 09:48:52.304507 env[1817]: time="2024-02-09T09:48:52.304434911Z" level=info msg="shim disconnected" id=6b5f42ae72e3d1ee1db41cdc915bdfc06b568c7fa2a0dcbc63157a8dd89f02d4 Feb 9 09:48:52.304876 env[1817]: time="2024-02-09T09:48:52.304839107Z" level=warning msg="cleaning up after shim disconnected" id=6b5f42ae72e3d1ee1db41cdc915bdfc06b568c7fa2a0dcbc63157a8dd89f02d4 namespace=k8s.io Feb 9 09:48:52.305043 env[1817]: time="2024-02-09T09:48:52.305014763Z" level=info msg="cleaning up dead shim" Feb 9 09:48:52.327199 env[1817]: time="2024-02-09T09:48:52.327143027Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:48:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4863 runtime=io.containerd.runc.v2\n" Feb 9 09:48:52.330655 env[1817]: time="2024-02-09T09:48:52.330558755Z" level=info msg="StopContainer for \"6b5f42ae72e3d1ee1db41cdc915bdfc06b568c7fa2a0dcbc63157a8dd89f02d4\" returns successfully" Feb 9 09:48:52.331875 env[1817]: time="2024-02-09T09:48:52.331812527Z" level=info msg="StopPodSandbox for \"564e926dc88dc6b4f72da20ea0b7c48383d337adcbb62290cf70ba218129d56f\"" Feb 9 09:48:52.332043 env[1817]: time="2024-02-09T09:48:52.331917407Z" level=info msg="Container to stop \"6b5f42ae72e3d1ee1db41cdc915bdfc06b568c7fa2a0dcbc63157a8dd89f02d4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:48:52.342416 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-564e926dc88dc6b4f72da20ea0b7c48383d337adcbb62290cf70ba218129d56f-shm.mount: Deactivated successfully. Feb 9 09:48:52.353390 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b73c5bf8c09b1090eb27e1ae7b01100d72ef6dab20734f7f14f4c6cf5994eb9-rootfs.mount: Deactivated successfully. Feb 9 09:48:52.363844 env[1817]: time="2024-02-09T09:48:52.363768852Z" level=info msg="shim disconnected" id=3b73c5bf8c09b1090eb27e1ae7b01100d72ef6dab20734f7f14f4c6cf5994eb9 Feb 9 09:48:52.363844 env[1817]: time="2024-02-09T09:48:52.363838788Z" level=warning msg="cleaning up after shim disconnected" id=3b73c5bf8c09b1090eb27e1ae7b01100d72ef6dab20734f7f14f4c6cf5994eb9 namespace=k8s.io Feb 9 09:48:52.364205 env[1817]: time="2024-02-09T09:48:52.363861600Z" level=info msg="cleaning up dead shim" Feb 9 09:48:52.395192 env[1817]: time="2024-02-09T09:48:52.395113285Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:48:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4887 runtime=io.containerd.runc.v2\n" Feb 9 09:48:52.398257 env[1817]: time="2024-02-09T09:48:52.398184901Z" level=info msg="StopContainer for \"3b73c5bf8c09b1090eb27e1ae7b01100d72ef6dab20734f7f14f4c6cf5994eb9\" returns successfully" Feb 9 09:48:52.399313 env[1817]: time="2024-02-09T09:48:52.399244405Z" level=info msg="StopPodSandbox for \"156e8e5781f1b4ca0d9764f30566edc4b1cc764d06f77f5282a973e1a9ee335f\"" Feb 9 09:48:52.399467 env[1817]: time="2024-02-09T09:48:52.399343285Z" level=info msg="Container to stop \"de018a57c4fc0e74f2ec4f520c636df9e6dbc0a78d7741c51c2fe33e548faa2d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:48:52.399467 env[1817]: time="2024-02-09T09:48:52.399378349Z" level=info msg="Container to stop \"09a7add569f1e4db120eca336c9bf103d278050e2cf0543645f8ed608034160f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:48:52.399467 env[1817]: time="2024-02-09T09:48:52.399406573Z" level=info msg="Container to stop \"3b73c5bf8c09b1090eb27e1ae7b01100d72ef6dab20734f7f14f4c6cf5994eb9\" must be in running or unknown state, current state 
\"CONTAINER_EXITED\"" Feb 9 09:48:52.399467 env[1817]: time="2024-02-09T09:48:52.399437173Z" level=info msg="Container to stop \"7d9a9be4d898f1f83b0e0bf2858459a31e10e6ecfa7513a43dc235c5939d4847\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:48:52.399892 env[1817]: time="2024-02-09T09:48:52.399463381Z" level=info msg="Container to stop \"8d8ee85a3836ca8d1959f5c95eb91d7ee82e8abdbbda02cf79bb4fe2cf170683\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:48:52.419463 env[1817]: time="2024-02-09T09:48:52.419276150Z" level=info msg="shim disconnected" id=564e926dc88dc6b4f72da20ea0b7c48383d337adcbb62290cf70ba218129d56f Feb 9 09:48:52.419463 env[1817]: time="2024-02-09T09:48:52.419355998Z" level=warning msg="cleaning up after shim disconnected" id=564e926dc88dc6b4f72da20ea0b7c48383d337adcbb62290cf70ba218129d56f namespace=k8s.io Feb 9 09:48:52.419463 env[1817]: time="2024-02-09T09:48:52.419379530Z" level=info msg="cleaning up dead shim" Feb 9 09:48:52.453169 env[1817]: time="2024-02-09T09:48:52.453101667Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:48:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4921 runtime=io.containerd.runc.v2\n" Feb 9 09:48:52.453983 env[1817]: time="2024-02-09T09:48:52.453924735Z" level=info msg="TearDown network for sandbox \"564e926dc88dc6b4f72da20ea0b7c48383d337adcbb62290cf70ba218129d56f\" successfully" Feb 9 09:48:52.454246 env[1817]: time="2024-02-09T09:48:52.453986067Z" level=info msg="StopPodSandbox for \"564e926dc88dc6b4f72da20ea0b7c48383d337adcbb62290cf70ba218129d56f\" returns successfully" Feb 9 09:48:52.489012 env[1817]: time="2024-02-09T09:48:52.488934316Z" level=info msg="shim disconnected" id=156e8e5781f1b4ca0d9764f30566edc4b1cc764d06f77f5282a973e1a9ee335f Feb 9 09:48:52.489771 env[1817]: time="2024-02-09T09:48:52.489706228Z" level=warning msg="cleaning up after shim disconnected" id=156e8e5781f1b4ca0d9764f30566edc4b1cc764d06f77f5282a973e1a9ee335f namespace=k8s.io Feb 9 09:48:52.490023 env[1817]: time="2024-02-09T09:48:52.489979600Z" level=info msg="cleaning up dead shim" Feb 9 09:48:52.508765 env[1817]: time="2024-02-09T09:48:52.508694848Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:48:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4947 runtime=io.containerd.runc.v2\n" Feb 9 09:48:52.509768 env[1817]: time="2024-02-09T09:48:52.509707108Z" level=info msg="TearDown network for sandbox \"156e8e5781f1b4ca0d9764f30566edc4b1cc764d06f77f5282a973e1a9ee335f\" successfully" Feb 9 09:48:52.509999 env[1817]: time="2024-02-09T09:48:52.509958724Z" level=info msg="StopPodSandbox for \"156e8e5781f1b4ca0d9764f30566edc4b1cc764d06f77f5282a973e1a9ee335f\" returns successfully" Feb 9 09:48:52.541814 kubelet[3060]: I0209 09:48:52.541677 3060 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7xsd\" (UniqueName: \"kubernetes.io/projected/67648f94-8d6b-4f6c-b67a-2a1034407668-kube-api-access-b7xsd\") pod \"67648f94-8d6b-4f6c-b67a-2a1034407668\" (UID: \"67648f94-8d6b-4f6c-b67a-2a1034407668\") " Feb 9 09:48:52.541814 kubelet[3060]: I0209 09:48:52.541772 3060 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/67648f94-8d6b-4f6c-b67a-2a1034407668-cilium-config-path\") pod \"67648f94-8d6b-4f6c-b67a-2a1034407668\" (UID: \"67648f94-8d6b-4f6c-b67a-2a1034407668\") " Feb 9 09:48:52.543183 kubelet[3060]: W0209 09:48:52.543019 
3060 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/67648f94-8d6b-4f6c-b67a-2a1034407668/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 09:48:52.548542 kubelet[3060]: I0209 09:48:52.548464 3060 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67648f94-8d6b-4f6c-b67a-2a1034407668-kube-api-access-b7xsd" (OuterVolumeSpecName: "kube-api-access-b7xsd") pod "67648f94-8d6b-4f6c-b67a-2a1034407668" (UID: "67648f94-8d6b-4f6c-b67a-2a1034407668"). InnerVolumeSpecName "kube-api-access-b7xsd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 09:48:52.550812 kubelet[3060]: I0209 09:48:52.550760 3060 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67648f94-8d6b-4f6c-b67a-2a1034407668-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "67648f94-8d6b-4f6c-b67a-2a1034407668" (UID: "67648f94-8d6b-4f6c-b67a-2a1034407668"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 09:48:52.642819 kubelet[3060]: I0209 09:48:52.642777 3060 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/250ab6e2-51be-4971-8f35-d678ee2fcd86-xtables-lock\") pod \"250ab6e2-51be-4971-8f35-d678ee2fcd86\" (UID: \"250ab6e2-51be-4971-8f35-d678ee2fcd86\") " Feb 9 09:48:52.643082 kubelet[3060]: I0209 09:48:52.643058 3060 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/250ab6e2-51be-4971-8f35-d678ee2fcd86-host-proc-sys-kernel\") pod \"250ab6e2-51be-4971-8f35-d678ee2fcd86\" (UID: \"250ab6e2-51be-4971-8f35-d678ee2fcd86\") " Feb 9 09:48:52.643267 kubelet[3060]: I0209 09:48:52.643245 3060 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/250ab6e2-51be-4971-8f35-d678ee2fcd86-cilium-config-path\") pod \"250ab6e2-51be-4971-8f35-d678ee2fcd86\" (UID: \"250ab6e2-51be-4971-8f35-d678ee2fcd86\") " Feb 9 09:48:52.643427 kubelet[3060]: I0209 09:48:52.643404 3060 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nblfw\" (UniqueName: \"kubernetes.io/projected/250ab6e2-51be-4971-8f35-d678ee2fcd86-kube-api-access-nblfw\") pod \"250ab6e2-51be-4971-8f35-d678ee2fcd86\" (UID: \"250ab6e2-51be-4971-8f35-d678ee2fcd86\") " Feb 9 09:48:52.643577 kubelet[3060]: I0209 09:48:52.643555 3060 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/250ab6e2-51be-4971-8f35-d678ee2fcd86-hostproc\") pod \"250ab6e2-51be-4971-8f35-d678ee2fcd86\" (UID: \"250ab6e2-51be-4971-8f35-d678ee2fcd86\") " Feb 9 09:48:52.643781 kubelet[3060]: I0209 09:48:52.643759 3060 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/250ab6e2-51be-4971-8f35-d678ee2fcd86-hubble-tls\") pod \"250ab6e2-51be-4971-8f35-d678ee2fcd86\" (UID: \"250ab6e2-51be-4971-8f35-d678ee2fcd86\") " Feb 9 09:48:52.643938 kubelet[3060]: I0209 09:48:52.643917 3060 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/250ab6e2-51be-4971-8f35-d678ee2fcd86-etc-cni-netd\") pod \"250ab6e2-51be-4971-8f35-d678ee2fcd86\" (UID: 
\"250ab6e2-51be-4971-8f35-d678ee2fcd86\") " Feb 9 09:48:52.644086 kubelet[3060]: I0209 09:48:52.644064 3060 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/250ab6e2-51be-4971-8f35-d678ee2fcd86-lib-modules\") pod \"250ab6e2-51be-4971-8f35-d678ee2fcd86\" (UID: \"250ab6e2-51be-4971-8f35-d678ee2fcd86\") " Feb 9 09:48:52.644256 kubelet[3060]: I0209 09:48:52.644230 3060 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/250ab6e2-51be-4971-8f35-d678ee2fcd86-clustermesh-secrets\") pod \"250ab6e2-51be-4971-8f35-d678ee2fcd86\" (UID: \"250ab6e2-51be-4971-8f35-d678ee2fcd86\") " Feb 9 09:48:52.644450 kubelet[3060]: I0209 09:48:52.644424 3060 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/250ab6e2-51be-4971-8f35-d678ee2fcd86-cilium-run\") pod \"250ab6e2-51be-4971-8f35-d678ee2fcd86\" (UID: \"250ab6e2-51be-4971-8f35-d678ee2fcd86\") " Feb 9 09:48:52.644649 kubelet[3060]: I0209 09:48:52.644625 3060 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/250ab6e2-51be-4971-8f35-d678ee2fcd86-host-proc-sys-net\") pod \"250ab6e2-51be-4971-8f35-d678ee2fcd86\" (UID: \"250ab6e2-51be-4971-8f35-d678ee2fcd86\") " Feb 9 09:48:52.644818 kubelet[3060]: I0209 09:48:52.644796 3060 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/250ab6e2-51be-4971-8f35-d678ee2fcd86-bpf-maps\") pod \"250ab6e2-51be-4971-8f35-d678ee2fcd86\" (UID: \"250ab6e2-51be-4971-8f35-d678ee2fcd86\") " Feb 9 09:48:52.644982 kubelet[3060]: I0209 09:48:52.644961 3060 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/250ab6e2-51be-4971-8f35-d678ee2fcd86-cilium-cgroup\") pod \"250ab6e2-51be-4971-8f35-d678ee2fcd86\" (UID: \"250ab6e2-51be-4971-8f35-d678ee2fcd86\") " Feb 9 09:48:52.645121 kubelet[3060]: I0209 09:48:52.645099 3060 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/250ab6e2-51be-4971-8f35-d678ee2fcd86-cni-path\") pod \"250ab6e2-51be-4971-8f35-d678ee2fcd86\" (UID: \"250ab6e2-51be-4971-8f35-d678ee2fcd86\") " Feb 9 09:48:52.645309 kubelet[3060]: I0209 09:48:52.645285 3060 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-b7xsd\" (UniqueName: \"kubernetes.io/projected/67648f94-8d6b-4f6c-b67a-2a1034407668-kube-api-access-b7xsd\") on node \"ip-172-31-16-76\" DevicePath \"\"" Feb 9 09:48:52.645460 kubelet[3060]: I0209 09:48:52.645437 3060 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/67648f94-8d6b-4f6c-b67a-2a1034407668-cilium-config-path\") on node \"ip-172-31-16-76\" DevicePath \"\"" Feb 9 09:48:52.645662 kubelet[3060]: I0209 09:48:52.645592 3060 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/250ab6e2-51be-4971-8f35-d678ee2fcd86-cni-path" (OuterVolumeSpecName: "cni-path") pod "250ab6e2-51be-4971-8f35-d678ee2fcd86" (UID: "250ab6e2-51be-4971-8f35-d678ee2fcd86"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:52.645871 kubelet[3060]: I0209 09:48:52.645838 3060 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/250ab6e2-51be-4971-8f35-d678ee2fcd86-hostproc" (OuterVolumeSpecName: "hostproc") pod "250ab6e2-51be-4971-8f35-d678ee2fcd86" (UID: "250ab6e2-51be-4971-8f35-d678ee2fcd86"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:52.646021 kubelet[3060]: I0209 09:48:52.642901 3060 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/250ab6e2-51be-4971-8f35-d678ee2fcd86-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "250ab6e2-51be-4971-8f35-d678ee2fcd86" (UID: "250ab6e2-51be-4971-8f35-d678ee2fcd86"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:52.646174 kubelet[3060]: W0209 09:48:52.643574 3060 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/250ab6e2-51be-4971-8f35-d678ee2fcd86/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 09:48:52.648828 kubelet[3060]: I0209 09:48:52.648761 3060 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/250ab6e2-51be-4971-8f35-d678ee2fcd86-kube-api-access-nblfw" (OuterVolumeSpecName: "kube-api-access-nblfw") pod "250ab6e2-51be-4971-8f35-d678ee2fcd86" (UID: "250ab6e2-51be-4971-8f35-d678ee2fcd86"). InnerVolumeSpecName "kube-api-access-nblfw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 09:48:52.652431 kubelet[3060]: I0209 09:48:52.652366 3060 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/250ab6e2-51be-4971-8f35-d678ee2fcd86-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "250ab6e2-51be-4971-8f35-d678ee2fcd86" (UID: "250ab6e2-51be-4971-8f35-d678ee2fcd86"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 09:48:52.652708 kubelet[3060]: I0209 09:48:52.643127 3060 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/250ab6e2-51be-4971-8f35-d678ee2fcd86-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "250ab6e2-51be-4971-8f35-d678ee2fcd86" (UID: "250ab6e2-51be-4971-8f35-d678ee2fcd86"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:52.652904 kubelet[3060]: I0209 09:48:52.652872 3060 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/250ab6e2-51be-4971-8f35-d678ee2fcd86-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "250ab6e2-51be-4971-8f35-d678ee2fcd86" (UID: "250ab6e2-51be-4971-8f35-d678ee2fcd86"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:52.653091 kubelet[3060]: I0209 09:48:52.653063 3060 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/250ab6e2-51be-4971-8f35-d678ee2fcd86-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "250ab6e2-51be-4971-8f35-d678ee2fcd86" (UID: "250ab6e2-51be-4971-8f35-d678ee2fcd86"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:52.653277 kubelet[3060]: I0209 09:48:52.653250 3060 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/250ab6e2-51be-4971-8f35-d678ee2fcd86-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "250ab6e2-51be-4971-8f35-d678ee2fcd86" (UID: "250ab6e2-51be-4971-8f35-d678ee2fcd86"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:52.653437 kubelet[3060]: I0209 09:48:52.653409 3060 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/250ab6e2-51be-4971-8f35-d678ee2fcd86-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "250ab6e2-51be-4971-8f35-d678ee2fcd86" (UID: "250ab6e2-51be-4971-8f35-d678ee2fcd86"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:52.653666 kubelet[3060]: I0209 09:48:52.653575 3060 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/250ab6e2-51be-4971-8f35-d678ee2fcd86-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "250ab6e2-51be-4971-8f35-d678ee2fcd86" (UID: "250ab6e2-51be-4971-8f35-d678ee2fcd86"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:52.654137 kubelet[3060]: I0209 09:48:52.654065 3060 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/250ab6e2-51be-4971-8f35-d678ee2fcd86-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "250ab6e2-51be-4971-8f35-d678ee2fcd86" (UID: "250ab6e2-51be-4971-8f35-d678ee2fcd86"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 09:48:52.654269 kubelet[3060]: I0209 09:48:52.654164 3060 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/250ab6e2-51be-4971-8f35-d678ee2fcd86-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "250ab6e2-51be-4971-8f35-d678ee2fcd86" (UID: "250ab6e2-51be-4971-8f35-d678ee2fcd86"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:52.659289 kubelet[3060]: I0209 09:48:52.659239 3060 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/250ab6e2-51be-4971-8f35-d678ee2fcd86-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "250ab6e2-51be-4971-8f35-d678ee2fcd86" (UID: "250ab6e2-51be-4971-8f35-d678ee2fcd86"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 09:48:52.746530 kubelet[3060]: I0209 09:48:52.746462 3060 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/250ab6e2-51be-4971-8f35-d678ee2fcd86-bpf-maps\") on node \"ip-172-31-16-76\" DevicePath \"\"" Feb 9 09:48:52.746530 kubelet[3060]: I0209 09:48:52.746519 3060 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/250ab6e2-51be-4971-8f35-d678ee2fcd86-cilium-cgroup\") on node \"ip-172-31-16-76\" DevicePath \"\"" Feb 9 09:48:52.746828 kubelet[3060]: I0209 09:48:52.746546 3060 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/250ab6e2-51be-4971-8f35-d678ee2fcd86-cni-path\") on node \"ip-172-31-16-76\" DevicePath \"\"" Feb 9 09:48:52.746828 kubelet[3060]: I0209 09:48:52.746570 3060 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/250ab6e2-51be-4971-8f35-d678ee2fcd86-xtables-lock\") on node \"ip-172-31-16-76\" DevicePath \"\"" Feb 9 09:48:52.746828 kubelet[3060]: I0209 09:48:52.746596 3060 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/250ab6e2-51be-4971-8f35-d678ee2fcd86-host-proc-sys-kernel\") on node \"ip-172-31-16-76\" DevicePath \"\"" Feb 9 09:48:52.746828 kubelet[3060]: I0209 09:48:52.746661 3060 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/250ab6e2-51be-4971-8f35-d678ee2fcd86-hostproc\") on node \"ip-172-31-16-76\" DevicePath \"\"" Feb 9 09:48:52.746828 kubelet[3060]: I0209 09:48:52.746686 3060 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/250ab6e2-51be-4971-8f35-d678ee2fcd86-hubble-tls\") on node \"ip-172-31-16-76\" DevicePath \"\"" Feb 9 09:48:52.746828 kubelet[3060]: I0209 09:48:52.746711 3060 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/250ab6e2-51be-4971-8f35-d678ee2fcd86-cilium-config-path\") on node \"ip-172-31-16-76\" DevicePath \"\"" Feb 9 09:48:52.746828 kubelet[3060]: I0209 09:48:52.746735 3060 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-nblfw\" (UniqueName: \"kubernetes.io/projected/250ab6e2-51be-4971-8f35-d678ee2fcd86-kube-api-access-nblfw\") on node \"ip-172-31-16-76\" DevicePath \"\"" Feb 9 09:48:52.746828 kubelet[3060]: I0209 09:48:52.746760 3060 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/250ab6e2-51be-4971-8f35-d678ee2fcd86-etc-cni-netd\") on node \"ip-172-31-16-76\" DevicePath \"\"" Feb 9 09:48:52.747298 kubelet[3060]: I0209 09:48:52.746788 3060 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/250ab6e2-51be-4971-8f35-d678ee2fcd86-lib-modules\") on node \"ip-172-31-16-76\" DevicePath \"\"" Feb 9 09:48:52.747298 kubelet[3060]: I0209 09:48:52.746815 3060 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/250ab6e2-51be-4971-8f35-d678ee2fcd86-clustermesh-secrets\") on node \"ip-172-31-16-76\" DevicePath \"\"" Feb 9 09:48:52.747298 kubelet[3060]: I0209 09:48:52.746838 3060 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/250ab6e2-51be-4971-8f35-d678ee2fcd86-cilium-run\") on node \"ip-172-31-16-76\" DevicePath \"\"" Feb 9 09:48:52.747298 kubelet[3060]: I0209 09:48:52.746863 3060 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/250ab6e2-51be-4971-8f35-d678ee2fcd86-host-proc-sys-net\") on node \"ip-172-31-16-76\" DevicePath \"\"" Feb 9 09:48:53.166269 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-564e926dc88dc6b4f72da20ea0b7c48383d337adcbb62290cf70ba218129d56f-rootfs.mount: Deactivated successfully. Feb 9 09:48:53.167664 systemd[1]: var-lib-kubelet-pods-67648f94\x2d8d6b\x2d4f6c\x2db67a\x2d2a1034407668-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2db7xsd.mount: Deactivated successfully. Feb 9 09:48:53.167932 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-156e8e5781f1b4ca0d9764f30566edc4b1cc764d06f77f5282a973e1a9ee335f-rootfs.mount: Deactivated successfully. Feb 9 09:48:53.168171 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-156e8e5781f1b4ca0d9764f30566edc4b1cc764d06f77f5282a973e1a9ee335f-shm.mount: Deactivated successfully. Feb 9 09:48:53.168405 systemd[1]: var-lib-kubelet-pods-250ab6e2\x2d51be\x2d4971\x2d8f35\x2dd678ee2fcd86-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnblfw.mount: Deactivated successfully. Feb 9 09:48:53.168676 systemd[1]: var-lib-kubelet-pods-250ab6e2\x2d51be\x2d4971\x2d8f35\x2dd678ee2fcd86-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 09:48:53.168943 systemd[1]: var-lib-kubelet-pods-250ab6e2\x2d51be\x2d4971\x2d8f35\x2dd678ee2fcd86-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 09:48:53.214177 kubelet[3060]: I0209 09:48:53.214130 3060 scope.go:115] "RemoveContainer" containerID="6b5f42ae72e3d1ee1db41cdc915bdfc06b568c7fa2a0dcbc63157a8dd89f02d4" Feb 9 09:48:53.222955 env[1817]: time="2024-02-09T09:48:53.222906500Z" level=info msg="RemoveContainer for \"6b5f42ae72e3d1ee1db41cdc915bdfc06b568c7fa2a0dcbc63157a8dd89f02d4\"" Feb 9 09:48:53.238593 env[1817]: time="2024-02-09T09:48:53.238496073Z" level=info msg="RemoveContainer for \"6b5f42ae72e3d1ee1db41cdc915bdfc06b568c7fa2a0dcbc63157a8dd89f02d4\" returns successfully" Feb 9 09:48:53.242408 kubelet[3060]: I0209 09:48:53.241160 3060 scope.go:115] "RemoveContainer" containerID="6b5f42ae72e3d1ee1db41cdc915bdfc06b568c7fa2a0dcbc63157a8dd89f02d4" Feb 9 09:48:53.242408 kubelet[3060]: E0209 09:48:53.242124 3060 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6b5f42ae72e3d1ee1db41cdc915bdfc06b568c7fa2a0dcbc63157a8dd89f02d4\": not found" containerID="6b5f42ae72e3d1ee1db41cdc915bdfc06b568c7fa2a0dcbc63157a8dd89f02d4" Feb 9 09:48:53.242408 kubelet[3060]: I0209 09:48:53.242209 3060 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:6b5f42ae72e3d1ee1db41cdc915bdfc06b568c7fa2a0dcbc63157a8dd89f02d4} err="failed to get container status \"6b5f42ae72e3d1ee1db41cdc915bdfc06b568c7fa2a0dcbc63157a8dd89f02d4\": rpc error: code = NotFound desc = an error occurred when try to find container \"6b5f42ae72e3d1ee1db41cdc915bdfc06b568c7fa2a0dcbc63157a8dd89f02d4\": not found" Feb 9 09:48:53.242408 kubelet[3060]: I0209 09:48:53.242240 3060 scope.go:115] "RemoveContainer" containerID="3b73c5bf8c09b1090eb27e1ae7b01100d72ef6dab20734f7f14f4c6cf5994eb9" Feb 9 09:48:53.242902 
env[1817]: time="2024-02-09T09:48:53.241665945Z" level=error msg="ContainerStatus for \"6b5f42ae72e3d1ee1db41cdc915bdfc06b568c7fa2a0dcbc63157a8dd89f02d4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6b5f42ae72e3d1ee1db41cdc915bdfc06b568c7fa2a0dcbc63157a8dd89f02d4\": not found" Feb 9 09:48:53.245918 env[1817]: time="2024-02-09T09:48:53.245854054Z" level=info msg="RemoveContainer for \"3b73c5bf8c09b1090eb27e1ae7b01100d72ef6dab20734f7f14f4c6cf5994eb9\"" Feb 9 09:48:53.261270 env[1817]: time="2024-02-09T09:48:53.257820023Z" level=info msg="RemoveContainer for \"3b73c5bf8c09b1090eb27e1ae7b01100d72ef6dab20734f7f14f4c6cf5994eb9\" returns successfully" Feb 9 09:48:53.261474 kubelet[3060]: I0209 09:48:53.259057 3060 scope.go:115] "RemoveContainer" containerID="8d8ee85a3836ca8d1959f5c95eb91d7ee82e8abdbbda02cf79bb4fe2cf170683" Feb 9 09:48:53.271330 env[1817]: time="2024-02-09T09:48:53.271268640Z" level=info msg="RemoveContainer for \"8d8ee85a3836ca8d1959f5c95eb91d7ee82e8abdbbda02cf79bb4fe2cf170683\"" Feb 9 09:48:53.285522 env[1817]: time="2024-02-09T09:48:53.285441853Z" level=info msg="RemoveContainer for \"8d8ee85a3836ca8d1959f5c95eb91d7ee82e8abdbbda02cf79bb4fe2cf170683\" returns successfully" Feb 9 09:48:53.286030 kubelet[3060]: I0209 09:48:53.285819 3060 scope.go:115] "RemoveContainer" containerID="7d9a9be4d898f1f83b0e0bf2858459a31e10e6ecfa7513a43dc235c5939d4847" Feb 9 09:48:53.289662 env[1817]: time="2024-02-09T09:48:53.288990493Z" level=info msg="RemoveContainer for \"7d9a9be4d898f1f83b0e0bf2858459a31e10e6ecfa7513a43dc235c5939d4847\"" Feb 9 09:48:53.325242 env[1817]: time="2024-02-09T09:48:53.325151812Z" level=info msg="RemoveContainer for \"7d9a9be4d898f1f83b0e0bf2858459a31e10e6ecfa7513a43dc235c5939d4847\" returns successfully" Feb 9 09:48:53.327034 kubelet[3060]: I0209 09:48:53.326981 3060 scope.go:115] "RemoveContainer" containerID="09a7add569f1e4db120eca336c9bf103d278050e2cf0543645f8ed608034160f" Feb 9 09:48:53.329314 env[1817]: time="2024-02-09T09:48:53.329035648Z" level=info msg="RemoveContainer for \"09a7add569f1e4db120eca336c9bf103d278050e2cf0543645f8ed608034160f\"" Feb 9 09:48:53.338407 env[1817]: time="2024-02-09T09:48:53.338307677Z" level=info msg="RemoveContainer for \"09a7add569f1e4db120eca336c9bf103d278050e2cf0543645f8ed608034160f\" returns successfully" Feb 9 09:48:53.338752 kubelet[3060]: I0209 09:48:53.338686 3060 scope.go:115] "RemoveContainer" containerID="de018a57c4fc0e74f2ec4f520c636df9e6dbc0a78d7741c51c2fe33e548faa2d" Feb 9 09:48:53.341110 env[1817]: time="2024-02-09T09:48:53.341035865Z" level=info msg="RemoveContainer for \"de018a57c4fc0e74f2ec4f520c636df9e6dbc0a78d7741c51c2fe33e548faa2d\"" Feb 9 09:48:53.346635 env[1817]: time="2024-02-09T09:48:53.346498266Z" level=info msg="RemoveContainer for \"de018a57c4fc0e74f2ec4f520c636df9e6dbc0a78d7741c51c2fe33e548faa2d\" returns successfully" Feb 9 09:48:53.346944 kubelet[3060]: I0209 09:48:53.346896 3060 scope.go:115] "RemoveContainer" containerID="3b73c5bf8c09b1090eb27e1ae7b01100d72ef6dab20734f7f14f4c6cf5994eb9" Feb 9 09:48:53.347521 env[1817]: time="2024-02-09T09:48:53.347414694Z" level=error msg="ContainerStatus for \"3b73c5bf8c09b1090eb27e1ae7b01100d72ef6dab20734f7f14f4c6cf5994eb9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3b73c5bf8c09b1090eb27e1ae7b01100d72ef6dab20734f7f14f4c6cf5994eb9\": not found" Feb 9 09:48:53.348107 kubelet[3060]: E0209 09:48:53.348044 3060 remote_runtime.go:415] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3b73c5bf8c09b1090eb27e1ae7b01100d72ef6dab20734f7f14f4c6cf5994eb9\": not found" containerID="3b73c5bf8c09b1090eb27e1ae7b01100d72ef6dab20734f7f14f4c6cf5994eb9" Feb 9 09:48:53.348291 kubelet[3060]: I0209 09:48:53.348118 3060 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:3b73c5bf8c09b1090eb27e1ae7b01100d72ef6dab20734f7f14f4c6cf5994eb9} err="failed to get container status \"3b73c5bf8c09b1090eb27e1ae7b01100d72ef6dab20734f7f14f4c6cf5994eb9\": rpc error: code = NotFound desc = an error occurred when try to find container \"3b73c5bf8c09b1090eb27e1ae7b01100d72ef6dab20734f7f14f4c6cf5994eb9\": not found" Feb 9 09:48:53.348291 kubelet[3060]: I0209 09:48:53.348147 3060 scope.go:115] "RemoveContainer" containerID="8d8ee85a3836ca8d1959f5c95eb91d7ee82e8abdbbda02cf79bb4fe2cf170683" Feb 9 09:48:53.349172 env[1817]: time="2024-02-09T09:48:53.349031970Z" level=error msg="ContainerStatus for \"8d8ee85a3836ca8d1959f5c95eb91d7ee82e8abdbbda02cf79bb4fe2cf170683\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8d8ee85a3836ca8d1959f5c95eb91d7ee82e8abdbbda02cf79bb4fe2cf170683\": not found" Feb 9 09:48:53.349888 kubelet[3060]: E0209 09:48:53.349834 3060 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8d8ee85a3836ca8d1959f5c95eb91d7ee82e8abdbbda02cf79bb4fe2cf170683\": not found" containerID="8d8ee85a3836ca8d1959f5c95eb91d7ee82e8abdbbda02cf79bb4fe2cf170683" Feb 9 09:48:53.350102 kubelet[3060]: I0209 09:48:53.349942 3060 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:8d8ee85a3836ca8d1959f5c95eb91d7ee82e8abdbbda02cf79bb4fe2cf170683} err="failed to get container status \"8d8ee85a3836ca8d1959f5c95eb91d7ee82e8abdbbda02cf79bb4fe2cf170683\": rpc error: code = NotFound desc = an error occurred when try to find container \"8d8ee85a3836ca8d1959f5c95eb91d7ee82e8abdbbda02cf79bb4fe2cf170683\": not found" Feb 9 09:48:53.350102 kubelet[3060]: I0209 09:48:53.349970 3060 scope.go:115] "RemoveContainer" containerID="7d9a9be4d898f1f83b0e0bf2858459a31e10e6ecfa7513a43dc235c5939d4847" Feb 9 09:48:53.350838 env[1817]: time="2024-02-09T09:48:53.350575542Z" level=error msg="ContainerStatus for \"7d9a9be4d898f1f83b0e0bf2858459a31e10e6ecfa7513a43dc235c5939d4847\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7d9a9be4d898f1f83b0e0bf2858459a31e10e6ecfa7513a43dc235c5939d4847\": not found" Feb 9 09:48:53.351842 kubelet[3060]: E0209 09:48:53.351431 3060 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7d9a9be4d898f1f83b0e0bf2858459a31e10e6ecfa7513a43dc235c5939d4847\": not found" containerID="7d9a9be4d898f1f83b0e0bf2858459a31e10e6ecfa7513a43dc235c5939d4847" Feb 9 09:48:53.351842 kubelet[3060]: I0209 09:48:53.351520 3060 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:7d9a9be4d898f1f83b0e0bf2858459a31e10e6ecfa7513a43dc235c5939d4847} err="failed to get container status \"7d9a9be4d898f1f83b0e0bf2858459a31e10e6ecfa7513a43dc235c5939d4847\": rpc error: code = NotFound desc = an error occurred when try to find container \"7d9a9be4d898f1f83b0e0bf2858459a31e10e6ecfa7513a43dc235c5939d4847\": not found" Feb 9 
09:48:53.351842 kubelet[3060]: I0209 09:48:53.351549 3060 scope.go:115] "RemoveContainer" containerID="09a7add569f1e4db120eca336c9bf103d278050e2cf0543645f8ed608034160f" Feb 9 09:48:53.352568 env[1817]: time="2024-02-09T09:48:53.352466886Z" level=error msg="ContainerStatus for \"09a7add569f1e4db120eca336c9bf103d278050e2cf0543645f8ed608034160f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"09a7add569f1e4db120eca336c9bf103d278050e2cf0543645f8ed608034160f\": not found" Feb 9 09:48:53.353133 kubelet[3060]: E0209 09:48:53.353095 3060 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"09a7add569f1e4db120eca336c9bf103d278050e2cf0543645f8ed608034160f\": not found" containerID="09a7add569f1e4db120eca336c9bf103d278050e2cf0543645f8ed608034160f" Feb 9 09:48:53.353263 kubelet[3060]: I0209 09:48:53.353160 3060 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:09a7add569f1e4db120eca336c9bf103d278050e2cf0543645f8ed608034160f} err="failed to get container status \"09a7add569f1e4db120eca336c9bf103d278050e2cf0543645f8ed608034160f\": rpc error: code = NotFound desc = an error occurred when try to find container \"09a7add569f1e4db120eca336c9bf103d278050e2cf0543645f8ed608034160f\": not found" Feb 9 09:48:53.353263 kubelet[3060]: I0209 09:48:53.353186 3060 scope.go:115] "RemoveContainer" containerID="de018a57c4fc0e74f2ec4f520c636df9e6dbc0a78d7741c51c2fe33e548faa2d" Feb 9 09:48:53.353733 env[1817]: time="2024-02-09T09:48:53.353551722Z" level=error msg="ContainerStatus for \"de018a57c4fc0e74f2ec4f520c636df9e6dbc0a78d7741c51c2fe33e548faa2d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"de018a57c4fc0e74f2ec4f520c636df9e6dbc0a78d7741c51c2fe33e548faa2d\": not found" Feb 9 09:48:53.354147 kubelet[3060]: E0209 09:48:53.354101 3060 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"de018a57c4fc0e74f2ec4f520c636df9e6dbc0a78d7741c51c2fe33e548faa2d\": not found" containerID="de018a57c4fc0e74f2ec4f520c636df9e6dbc0a78d7741c51c2fe33e548faa2d" Feb 9 09:48:53.354275 kubelet[3060]: I0209 09:48:53.354165 3060 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:de018a57c4fc0e74f2ec4f520c636df9e6dbc0a78d7741c51c2fe33e548faa2d} err="failed to get container status \"de018a57c4fc0e74f2ec4f520c636df9e6dbc0a78d7741c51c2fe33e548faa2d\": rpc error: code = NotFound desc = an error occurred when try to find container \"de018a57c4fc0e74f2ec4f520c636df9e6dbc0a78d7741c51c2fe33e548faa2d\": not found" Feb 9 09:48:53.751186 kubelet[3060]: I0209 09:48:53.751125 3060 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=250ab6e2-51be-4971-8f35-d678ee2fcd86 path="/var/lib/kubelet/pods/250ab6e2-51be-4971-8f35-d678ee2fcd86/volumes" Feb 9 09:48:53.754420 kubelet[3060]: I0209 09:48:53.754359 3060 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=67648f94-8d6b-4f6c-b67a-2a1034407668 path="/var/lib/kubelet/pods/67648f94-8d6b-4f6c-b67a-2a1034407668/volumes" Feb 9 09:48:53.775300 kubelet[3060]: E0209 09:48:53.775259 3060 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 09:48:54.068991 sshd[4796]: 
pam_unix(sshd:session): session closed for user core Feb 9 09:48:54.074767 systemd-logind[1794]: Session 25 logged out. Waiting for processes to exit. Feb 9 09:48:54.075919 systemd[1]: sshd@24-172.31.16.76:22-139.178.89.65:60362.service: Deactivated successfully. Feb 9 09:48:54.079241 systemd[1]: session-25.scope: Deactivated successfully. Feb 9 09:48:54.082091 systemd-logind[1794]: Removed session 25. Feb 9 09:48:54.097179 systemd[1]: Started sshd@25-172.31.16.76:22-139.178.89.65:60374.service. Feb 9 09:48:54.268644 sshd[4967]: Accepted publickey for core from 139.178.89.65 port 60374 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:48:54.271113 sshd[4967]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:48:54.279159 systemd-logind[1794]: New session 26 of user core. Feb 9 09:48:54.280403 systemd[1]: Started session-26.scope. Feb 9 09:48:55.480849 kubelet[3060]: I0209 09:48:55.480777 3060 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:48:55.481534 kubelet[3060]: E0209 09:48:55.480874 3060 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="250ab6e2-51be-4971-8f35-d678ee2fcd86" containerName="clean-cilium-state" Feb 9 09:48:55.481534 kubelet[3060]: E0209 09:48:55.480900 3060 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="250ab6e2-51be-4971-8f35-d678ee2fcd86" containerName="cilium-agent" Feb 9 09:48:55.481534 kubelet[3060]: E0209 09:48:55.480934 3060 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="250ab6e2-51be-4971-8f35-d678ee2fcd86" containerName="mount-bpf-fs" Feb 9 09:48:55.481534 kubelet[3060]: E0209 09:48:55.480956 3060 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="250ab6e2-51be-4971-8f35-d678ee2fcd86" containerName="mount-cgroup" Feb 9 09:48:55.481534 kubelet[3060]: E0209 09:48:55.480974 3060 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="250ab6e2-51be-4971-8f35-d678ee2fcd86" containerName="apply-sysctl-overwrites" Feb 9 09:48:55.481534 kubelet[3060]: E0209 09:48:55.480991 3060 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="67648f94-8d6b-4f6c-b67a-2a1034407668" containerName="cilium-operator" Feb 9 09:48:55.481534 kubelet[3060]: I0209 09:48:55.481043 3060 memory_manager.go:346] "RemoveStaleState removing state" podUID="250ab6e2-51be-4971-8f35-d678ee2fcd86" containerName="cilium-agent" Feb 9 09:48:55.481534 kubelet[3060]: I0209 09:48:55.481061 3060 memory_manager.go:346] "RemoveStaleState removing state" podUID="67648f94-8d6b-4f6c-b67a-2a1034407668" containerName="cilium-operator" Feb 9 09:48:55.489930 sshd[4967]: pam_unix(sshd:session): session closed for user core Feb 9 09:48:55.506622 systemd[1]: sshd@25-172.31.16.76:22-139.178.89.65:60374.service: Deactivated successfully. Feb 9 09:48:55.523166 systemd[1]: Started sshd@26-172.31.16.76:22-139.178.89.65:60384.service. Feb 9 09:48:55.526260 systemd[1]: session-26.scope: Deactivated successfully. Feb 9 09:48:55.535333 systemd-logind[1794]: Session 26 logged out. Waiting for processes to exit. Feb 9 09:48:55.543924 systemd-logind[1794]: Removed session 26. 
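
The burst of "ContainerStatus … not found" errors above is expected during this teardown: the kubelet re-requests status for containers containerd has already deleted, and a NotFound answer is treated as "already gone" rather than as a failure. A minimal sketch of that idempotent-delete pattern using the standard gRPC status package (illustrative, not the kubelet's actual code):

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // removeIfPresent treats NotFound as success so that retried
    // deletions of an already-removed container are harmless.
    func removeIfPresent(remove func(id string) error, id string) error {
        if err := remove(id); err != nil && status.Code(err) != codes.NotFound {
            return err
        }
        return nil
    }

    func main() {
        gone := func(id string) error {
            return status.Error(codes.NotFound, "no such container: "+id)
        }
        fmt.Println(removeIfPresent(gone, "6b5f42ae")) // -> <nil>
    }
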
Feb 9 09:48:55.568139 kubelet[3060]: I0209 09:48:55.566919 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c7d6de51-9b66-422e-a60c-684a9b6568a6-bpf-maps\") pod \"cilium-6gmdr\" (UID: \"c7d6de51-9b66-422e-a60c-684a9b6568a6\") " pod="kube-system/cilium-6gmdr" Feb 9 09:48:55.568139 kubelet[3060]: I0209 09:48:55.567053 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c7d6de51-9b66-422e-a60c-684a9b6568a6-host-proc-sys-net\") pod \"cilium-6gmdr\" (UID: \"c7d6de51-9b66-422e-a60c-684a9b6568a6\") " pod="kube-system/cilium-6gmdr" Feb 9 09:48:55.568139 kubelet[3060]: I0209 09:48:55.567106 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c7d6de51-9b66-422e-a60c-684a9b6568a6-hubble-tls\") pod \"cilium-6gmdr\" (UID: \"c7d6de51-9b66-422e-a60c-684a9b6568a6\") " pod="kube-system/cilium-6gmdr" Feb 9 09:48:55.568139 kubelet[3060]: I0209 09:48:55.567216 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c7d6de51-9b66-422e-a60c-684a9b6568a6-cni-path\") pod \"cilium-6gmdr\" (UID: \"c7d6de51-9b66-422e-a60c-684a9b6568a6\") " pod="kube-system/cilium-6gmdr" Feb 9 09:48:55.568139 kubelet[3060]: I0209 09:48:55.567334 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx2wp\" (UniqueName: \"kubernetes.io/projected/c7d6de51-9b66-422e-a60c-684a9b6568a6-kube-api-access-cx2wp\") pod \"cilium-6gmdr\" (UID: \"c7d6de51-9b66-422e-a60c-684a9b6568a6\") " pod="kube-system/cilium-6gmdr" Feb 9 09:48:55.568139 kubelet[3060]: I0209 09:48:55.567387 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c7d6de51-9b66-422e-a60c-684a9b6568a6-hostproc\") pod \"cilium-6gmdr\" (UID: \"c7d6de51-9b66-422e-a60c-684a9b6568a6\") " pod="kube-system/cilium-6gmdr" Feb 9 09:48:55.568716 kubelet[3060]: I0209 09:48:55.567431 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c7d6de51-9b66-422e-a60c-684a9b6568a6-xtables-lock\") pod \"cilium-6gmdr\" (UID: \"c7d6de51-9b66-422e-a60c-684a9b6568a6\") " pod="kube-system/cilium-6gmdr" Feb 9 09:48:55.568716 kubelet[3060]: I0209 09:48:55.567473 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c7d6de51-9b66-422e-a60c-684a9b6568a6-lib-modules\") pod \"cilium-6gmdr\" (UID: \"c7d6de51-9b66-422e-a60c-684a9b6568a6\") " pod="kube-system/cilium-6gmdr" Feb 9 09:48:55.568716 kubelet[3060]: I0209 09:48:55.567521 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c7d6de51-9b66-422e-a60c-684a9b6568a6-cilium-run\") pod \"cilium-6gmdr\" (UID: \"c7d6de51-9b66-422e-a60c-684a9b6568a6\") " pod="kube-system/cilium-6gmdr" Feb 9 09:48:55.568716 kubelet[3060]: I0209 09:48:55.567564 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/c7d6de51-9b66-422e-a60c-684a9b6568a6-etc-cni-netd\") pod \"cilium-6gmdr\" (UID: \"c7d6de51-9b66-422e-a60c-684a9b6568a6\") " pod="kube-system/cilium-6gmdr" Feb 9 09:48:55.568716 kubelet[3060]: I0209 09:48:55.567630 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c7d6de51-9b66-422e-a60c-684a9b6568a6-host-proc-sys-kernel\") pod \"cilium-6gmdr\" (UID: \"c7d6de51-9b66-422e-a60c-684a9b6568a6\") " pod="kube-system/cilium-6gmdr" Feb 9 09:48:55.568716 kubelet[3060]: I0209 09:48:55.567677 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c7d6de51-9b66-422e-a60c-684a9b6568a6-cilium-cgroup\") pod \"cilium-6gmdr\" (UID: \"c7d6de51-9b66-422e-a60c-684a9b6568a6\") " pod="kube-system/cilium-6gmdr" Feb 9 09:48:55.569153 kubelet[3060]: I0209 09:48:55.567724 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c7d6de51-9b66-422e-a60c-684a9b6568a6-clustermesh-secrets\") pod \"cilium-6gmdr\" (UID: \"c7d6de51-9b66-422e-a60c-684a9b6568a6\") " pod="kube-system/cilium-6gmdr" Feb 9 09:48:55.569153 kubelet[3060]: I0209 09:48:55.567768 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c7d6de51-9b66-422e-a60c-684a9b6568a6-cilium-config-path\") pod \"cilium-6gmdr\" (UID: \"c7d6de51-9b66-422e-a60c-684a9b6568a6\") " pod="kube-system/cilium-6gmdr" Feb 9 09:48:55.569153 kubelet[3060]: I0209 09:48:55.567814 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c7d6de51-9b66-422e-a60c-684a9b6568a6-cilium-ipsec-secrets\") pod \"cilium-6gmdr\" (UID: \"c7d6de51-9b66-422e-a60c-684a9b6568a6\") " pod="kube-system/cilium-6gmdr" Feb 9 09:48:55.749147 sshd[4978]: Accepted publickey for core from 139.178.89.65 port 60384 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:48:55.752403 sshd[4978]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:48:55.764082 systemd[1]: Started session-27.scope. Feb 9 09:48:55.765050 systemd-logind[1794]: New session 27 of user core. Feb 9 09:48:55.792419 env[1817]: time="2024-02-09T09:48:55.791922471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6gmdr,Uid:c7d6de51-9b66-422e-a60c-684a9b6568a6,Namespace:kube-system,Attempt:0,}" Feb 9 09:48:55.830067 env[1817]: time="2024-02-09T09:48:55.829945414Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:48:55.830067 env[1817]: time="2024-02-09T09:48:55.830022070Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:48:55.830405 env[1817]: time="2024-02-09T09:48:55.830327086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:48:55.830833 env[1817]: time="2024-02-09T09:48:55.830753098Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1425bb07fe98d055e2f6c82f87d692b587fbe5e27c53303d1ca2a279ff8457d3 pid=4996 runtime=io.containerd.runc.v2 Feb 9 09:48:55.916256 env[1817]: time="2024-02-09T09:48:55.916186465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6gmdr,Uid:c7d6de51-9b66-422e-a60c-684a9b6568a6,Namespace:kube-system,Attempt:0,} returns sandbox id \"1425bb07fe98d055e2f6c82f87d692b587fbe5e27c53303d1ca2a279ff8457d3\"" Feb 9 09:48:55.923988 env[1817]: time="2024-02-09T09:48:55.923851814Z" level=info msg="CreateContainer within sandbox \"1425bb07fe98d055e2f6c82f87d692b587fbe5e27c53303d1ca2a279ff8457d3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 09:48:55.945981 env[1817]: time="2024-02-09T09:48:55.945898374Z" level=info msg="CreateContainer within sandbox \"1425bb07fe98d055e2f6c82f87d692b587fbe5e27c53303d1ca2a279ff8457d3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a1b46f1d36753021889a725bbaf9faaea205edf154379d63c14e5f037a341184\"" Feb 9 09:48:55.948901 env[1817]: time="2024-02-09T09:48:55.947091678Z" level=info msg="StartContainer for \"a1b46f1d36753021889a725bbaf9faaea205edf154379d63c14e5f037a341184\"" Feb 9 09:48:56.089344 env[1817]: time="2024-02-09T09:48:56.089164879Z" level=info msg="StartContainer for \"a1b46f1d36753021889a725bbaf9faaea205edf154379d63c14e5f037a341184\" returns successfully" Feb 9 09:48:56.111058 sshd[4978]: pam_unix(sshd:session): session closed for user core Feb 9 09:48:56.117336 systemd[1]: sshd@26-172.31.16.76:22-139.178.89.65:60384.service: Deactivated successfully. Feb 9 09:48:56.118933 systemd[1]: session-27.scope: Deactivated successfully. Feb 9 09:48:56.122165 systemd-logind[1794]: Session 27 logged out. Waiting for processes to exit. Feb 9 09:48:56.127050 systemd-logind[1794]: Removed session 27. Feb 9 09:48:56.137102 systemd[1]: Started sshd@27-172.31.16.76:22-139.178.89.65:60398.service. 
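The VerifyControllerAttachedVolume entries at 09:48:55.56x enumerate everything cilium-6gmdr mounts: hostPath volumes (bpf-maps, cni-path, hostproc, cilium-run, cilium-cgroup, etc-cni-netd, lib-modules, xtables-lock, host-proc-sys-net, host-proc-sys-kernel), two secrets (clustermesh-secrets, cilium-ipsec-secrets), a configmap (cilium-config-path), and projected volumes (hubble-tls, the kube-api-access token). As a sketch of how one such hostPath volume is declared with the k8s.io/api Go types; the path and type here are assumptions modeled on cilium's usual defaults, not values taken from this log:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        hostPathType := corev1.HostPathDirectoryOrCreate
        // Path and type are assumptions based on cilium's typical chart defaults.
        bpfMaps := corev1.Volume{
            Name: "bpf-maps",
            VolumeSource: corev1.VolumeSource{
                HostPath: &corev1.HostPathVolumeSource{
                    Path: "/sys/fs/bpf",
                    Type: &hostPathType,
                },
            },
        }
        fmt.Printf("volume %q -> host path %s\n", bpfMaps.Name, bpfMaps.HostPath.Path)
    }

hostPath mounts are what let the agent reach node-level state such as the BPF filesystem, which is also why the kubelet must unmount each of them individually when the pod goes away, as seen further down.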
Feb 9 09:48:56.230114 env[1817]: time="2024-02-09T09:48:56.230028002Z" level=info msg="shim disconnected" id=a1b46f1d36753021889a725bbaf9faaea205edf154379d63c14e5f037a341184 Feb 9 09:48:56.230777 env[1817]: time="2024-02-09T09:48:56.230720438Z" level=warning msg="cleaning up after shim disconnected" id=a1b46f1d36753021889a725bbaf9faaea205edf154379d63c14e5f037a341184 namespace=k8s.io Feb 9 09:48:56.230979 env[1817]: time="2024-02-09T09:48:56.230945103Z" level=info msg="cleaning up dead shim" Feb 9 09:48:56.245872 env[1817]: time="2024-02-09T09:48:56.244732242Z" level=info msg="StopContainer for \"a1b46f1d36753021889a725bbaf9faaea205edf154379d63c14e5f037a341184\" with timeout 1 (s)" Feb 9 09:48:56.245872 env[1817]: time="2024-02-09T09:48:56.245058486Z" level=info msg="StopContainer for \"a1b46f1d36753021889a725bbaf9faaea205edf154379d63c14e5f037a341184\" returns successfully" Feb 9 09:48:56.247849 env[1817]: time="2024-02-09T09:48:56.246446946Z" level=info msg="StopPodSandbox for \"1425bb07fe98d055e2f6c82f87d692b587fbe5e27c53303d1ca2a279ff8457d3\"" Feb 9 09:48:56.298719 env[1817]: time="2024-02-09T09:48:56.298515293Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:48:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5090 runtime=io.containerd.runc.v2\n" Feb 9 09:48:56.333921 env[1817]: time="2024-02-09T09:48:56.333843817Z" level=info msg="shim disconnected" id=1425bb07fe98d055e2f6c82f87d692b587fbe5e27c53303d1ca2a279ff8457d3 Feb 9 09:48:56.333921 env[1817]: time="2024-02-09T09:48:56.333917641Z" level=warning msg="cleaning up after shim disconnected" id=1425bb07fe98d055e2f6c82f87d692b587fbe5e27c53303d1ca2a279ff8457d3 namespace=k8s.io Feb 9 09:48:56.334254 env[1817]: time="2024-02-09T09:48:56.333940909Z" level=info msg="cleaning up dead shim" Feb 9 09:48:56.350015 env[1817]: time="2024-02-09T09:48:56.349857713Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:48:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5121 runtime=io.containerd.runc.v2\n" Feb 9 09:48:56.351342 env[1817]: time="2024-02-09T09:48:56.351287549Z" level=info msg="TearDown network for sandbox \"1425bb07fe98d055e2f6c82f87d692b587fbe5e27c53303d1ca2a279ff8457d3\" successfully" Feb 9 09:48:56.351492 env[1817]: time="2024-02-09T09:48:56.351340385Z" level=info msg="StopPodSandbox for \"1425bb07fe98d055e2f6c82f87d692b587fbe5e27c53303d1ca2a279ff8457d3\" returns successfully" Feb 9 09:48:56.354714 sshd[5074]: Accepted publickey for core from 139.178.89.65 port 60398 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:48:56.357914 sshd[5074]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:48:56.371268 systemd[1]: Started session-28.scope. Feb 9 09:48:56.371732 systemd-logind[1794]: New session 28 of user core. 
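Note the compressed lifecycle here: the mount-cgroup task (a1b46f1d) exits, its shim is reaped, and the whole cilium-6gmdr sandbox (1425bb07) is stopped and torn down barely a second after creation; the volume unmounts that follow show the pod object itself was deleted rather than crashing. To see what containerd is actually tracking for Kubernetes at a moment like this, the k8s.io namespace can be listed directly; a minimal sketch against the node's containerd socket:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // Kubernetes-managed containers live in the k8s.io namespace (see the log lines above).
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        containers, err := client.Containers(ctx)
        if err != nil {
            log.Fatal(err)
        }
        for _, c := range containers {
            fmt.Println(c.ID())
        }
    }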
Feb 9 09:48:56.488589 kubelet[3060]: I0209 09:48:56.488518 3060 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c7d6de51-9b66-422e-a60c-684a9b6568a6-hubble-tls\") pod \"c7d6de51-9b66-422e-a60c-684a9b6568a6\" (UID: \"c7d6de51-9b66-422e-a60c-684a9b6568a6\") " Feb 9 09:48:56.489294 kubelet[3060]: I0209 09:48:56.488644 3060 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c7d6de51-9b66-422e-a60c-684a9b6568a6-xtables-lock\") pod \"c7d6de51-9b66-422e-a60c-684a9b6568a6\" (UID: \"c7d6de51-9b66-422e-a60c-684a9b6568a6\") " Feb 9 09:48:56.489294 kubelet[3060]: I0209 09:48:56.488690 3060 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c7d6de51-9b66-422e-a60c-684a9b6568a6-etc-cni-netd\") pod \"c7d6de51-9b66-422e-a60c-684a9b6568a6\" (UID: \"c7d6de51-9b66-422e-a60c-684a9b6568a6\") " Feb 9 09:48:56.489294 kubelet[3060]: I0209 09:48:56.488756 3060 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c7d6de51-9b66-422e-a60c-684a9b6568a6-cilium-cgroup\") pod \"c7d6de51-9b66-422e-a60c-684a9b6568a6\" (UID: \"c7d6de51-9b66-422e-a60c-684a9b6568a6\") " Feb 9 09:48:56.489294 kubelet[3060]: I0209 09:48:56.488829 3060 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c7d6de51-9b66-422e-a60c-684a9b6568a6-cilium-ipsec-secrets\") pod \"c7d6de51-9b66-422e-a60c-684a9b6568a6\" (UID: \"c7d6de51-9b66-422e-a60c-684a9b6568a6\") " Feb 9 09:48:56.489294 kubelet[3060]: I0209 09:48:56.488895 3060 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c7d6de51-9b66-422e-a60c-684a9b6568a6-lib-modules\") pod \"c7d6de51-9b66-422e-a60c-684a9b6568a6\" (UID: \"c7d6de51-9b66-422e-a60c-684a9b6568a6\") " Feb 9 09:48:56.489294 kubelet[3060]: I0209 09:48:56.489014 3060 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c7d6de51-9b66-422e-a60c-684a9b6568a6-host-proc-sys-kernel\") pod \"c7d6de51-9b66-422e-a60c-684a9b6568a6\" (UID: \"c7d6de51-9b66-422e-a60c-684a9b6568a6\") " Feb 9 09:48:56.489727 kubelet[3060]: I0209 09:48:56.489084 3060 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c7d6de51-9b66-422e-a60c-684a9b6568a6-bpf-maps\") pod \"c7d6de51-9b66-422e-a60c-684a9b6568a6\" (UID: \"c7d6de51-9b66-422e-a60c-684a9b6568a6\") " Feb 9 09:48:56.489727 kubelet[3060]: I0209 09:48:56.489127 3060 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c7d6de51-9b66-422e-a60c-684a9b6568a6-cni-path\") pod \"c7d6de51-9b66-422e-a60c-684a9b6568a6\" (UID: \"c7d6de51-9b66-422e-a60c-684a9b6568a6\") " Feb 9 09:48:56.490006 kubelet[3060]: I0209 09:48:56.489921 3060 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cx2wp\" (UniqueName: \"kubernetes.io/projected/c7d6de51-9b66-422e-a60c-684a9b6568a6-kube-api-access-cx2wp\") pod \"c7d6de51-9b66-422e-a60c-684a9b6568a6\" (UID: \"c7d6de51-9b66-422e-a60c-684a9b6568a6\") " Feb 9 09:48:56.490006 kubelet[3060]: I0209 09:48:56.489978 3060 
reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c7d6de51-9b66-422e-a60c-684a9b6568a6-hostproc\") pod \"c7d6de51-9b66-422e-a60c-684a9b6568a6\" (UID: \"c7d6de51-9b66-422e-a60c-684a9b6568a6\") " Feb 9 09:48:56.490229 kubelet[3060]: I0209 09:48:56.490023 3060 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c7d6de51-9b66-422e-a60c-684a9b6568a6-host-proc-sys-net\") pod \"c7d6de51-9b66-422e-a60c-684a9b6568a6\" (UID: \"c7d6de51-9b66-422e-a60c-684a9b6568a6\") " Feb 9 09:48:56.490229 kubelet[3060]: I0209 09:48:56.490065 3060 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c7d6de51-9b66-422e-a60c-684a9b6568a6-cilium-run\") pod \"c7d6de51-9b66-422e-a60c-684a9b6568a6\" (UID: \"c7d6de51-9b66-422e-a60c-684a9b6568a6\") " Feb 9 09:48:56.490229 kubelet[3060]: I0209 09:48:56.490114 3060 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c7d6de51-9b66-422e-a60c-684a9b6568a6-clustermesh-secrets\") pod \"c7d6de51-9b66-422e-a60c-684a9b6568a6\" (UID: \"c7d6de51-9b66-422e-a60c-684a9b6568a6\") " Feb 9 09:48:56.490229 kubelet[3060]: I0209 09:48:56.490161 3060 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c7d6de51-9b66-422e-a60c-684a9b6568a6-cilium-config-path\") pod \"c7d6de51-9b66-422e-a60c-684a9b6568a6\" (UID: \"c7d6de51-9b66-422e-a60c-684a9b6568a6\") " Feb 9 09:48:56.490562 kubelet[3060]: I0209 09:48:56.489690 3060 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7d6de51-9b66-422e-a60c-684a9b6568a6-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c7d6de51-9b66-422e-a60c-684a9b6568a6" (UID: "c7d6de51-9b66-422e-a60c-684a9b6568a6"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:56.490562 kubelet[3060]: I0209 09:48:56.489535 3060 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7d6de51-9b66-422e-a60c-684a9b6568a6-cni-path" (OuterVolumeSpecName: "cni-path") pod "c7d6de51-9b66-422e-a60c-684a9b6568a6" (UID: "c7d6de51-9b66-422e-a60c-684a9b6568a6"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:56.490562 kubelet[3060]: I0209 09:48:56.489659 3060 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7d6de51-9b66-422e-a60c-684a9b6568a6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c7d6de51-9b66-422e-a60c-684a9b6568a6" (UID: "c7d6de51-9b66-422e-a60c-684a9b6568a6"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:56.490562 kubelet[3060]: I0209 09:48:56.489752 3060 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7d6de51-9b66-422e-a60c-684a9b6568a6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c7d6de51-9b66-422e-a60c-684a9b6568a6" (UID: "c7d6de51-9b66-422e-a60c-684a9b6568a6"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:56.490562 kubelet[3060]: I0209 09:48:56.489826 3060 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7d6de51-9b66-422e-a60c-684a9b6568a6-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c7d6de51-9b66-422e-a60c-684a9b6568a6" (UID: "c7d6de51-9b66-422e-a60c-684a9b6568a6"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:56.491066 kubelet[3060]: I0209 09:48:56.489857 3060 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7d6de51-9b66-422e-a60c-684a9b6568a6-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c7d6de51-9b66-422e-a60c-684a9b6568a6" (UID: "c7d6de51-9b66-422e-a60c-684a9b6568a6"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:56.491066 kubelet[3060]: I0209 09:48:56.490441 3060 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7d6de51-9b66-422e-a60c-684a9b6568a6-hostproc" (OuterVolumeSpecName: "hostproc") pod "c7d6de51-9b66-422e-a60c-684a9b6568a6" (UID: "c7d6de51-9b66-422e-a60c-684a9b6568a6"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:56.491066 kubelet[3060]: I0209 09:48:56.489208 3060 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7d6de51-9b66-422e-a60c-684a9b6568a6-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c7d6de51-9b66-422e-a60c-684a9b6568a6" (UID: "c7d6de51-9b66-422e-a60c-684a9b6568a6"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:56.491654 kubelet[3060]: I0209 09:48:56.491340 3060 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7d6de51-9b66-422e-a60c-684a9b6568a6-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c7d6de51-9b66-422e-a60c-684a9b6568a6" (UID: "c7d6de51-9b66-422e-a60c-684a9b6568a6"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:56.491654 kubelet[3060]: I0209 09:48:56.491415 3060 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7d6de51-9b66-422e-a60c-684a9b6568a6-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c7d6de51-9b66-422e-a60c-684a9b6568a6" (UID: "c7d6de51-9b66-422e-a60c-684a9b6568a6"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:56.492084 kubelet[3060]: W0209 09:48:56.492036 3060 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/c7d6de51-9b66-422e-a60c-684a9b6568a6/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 09:48:56.505661 kubelet[3060]: I0209 09:48:56.503534 3060 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7d6de51-9b66-422e-a60c-684a9b6568a6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c7d6de51-9b66-422e-a60c-684a9b6568a6" (UID: "c7d6de51-9b66-422e-a60c-684a9b6568a6"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 09:48:56.507768 kubelet[3060]: I0209 09:48:56.506809 3060 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7d6de51-9b66-422e-a60c-684a9b6568a6-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c7d6de51-9b66-422e-a60c-684a9b6568a6" (UID: "c7d6de51-9b66-422e-a60c-684a9b6568a6"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 09:48:56.516952 kubelet[3060]: I0209 09:48:56.516222 3060 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7d6de51-9b66-422e-a60c-684a9b6568a6-kube-api-access-cx2wp" (OuterVolumeSpecName: "kube-api-access-cx2wp") pod "c7d6de51-9b66-422e-a60c-684a9b6568a6" (UID: "c7d6de51-9b66-422e-a60c-684a9b6568a6"). InnerVolumeSpecName "kube-api-access-cx2wp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 09:48:56.523354 kubelet[3060]: I0209 09:48:56.523287 3060 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7d6de51-9b66-422e-a60c-684a9b6568a6-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "c7d6de51-9b66-422e-a60c-684a9b6568a6" (UID: "c7d6de51-9b66-422e-a60c-684a9b6568a6"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 09:48:56.526212 kubelet[3060]: I0209 09:48:56.526142 3060 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7d6de51-9b66-422e-a60c-684a9b6568a6-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c7d6de51-9b66-422e-a60c-684a9b6568a6" (UID: "c7d6de51-9b66-422e-a60c-684a9b6568a6"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 09:48:56.590774 kubelet[3060]: I0209 09:48:56.590712 3060 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c7d6de51-9b66-422e-a60c-684a9b6568a6-hubble-tls\") on node \"ip-172-31-16-76\" DevicePath \"\"" Feb 9 09:48:56.590774 kubelet[3060]: I0209 09:48:56.590775 3060 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c7d6de51-9b66-422e-a60c-684a9b6568a6-xtables-lock\") on node \"ip-172-31-16-76\" DevicePath \"\"" Feb 9 09:48:56.591043 kubelet[3060]: I0209 09:48:56.590803 3060 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c7d6de51-9b66-422e-a60c-684a9b6568a6-etc-cni-netd\") on node \"ip-172-31-16-76\" DevicePath \"\"" Feb 9 09:48:56.591043 kubelet[3060]: I0209 09:48:56.590829 3060 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c7d6de51-9b66-422e-a60c-684a9b6568a6-cilium-cgroup\") on node \"ip-172-31-16-76\" DevicePath \"\"" Feb 9 09:48:56.591043 kubelet[3060]: I0209 09:48:56.590857 3060 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c7d6de51-9b66-422e-a60c-684a9b6568a6-cilium-ipsec-secrets\") on node \"ip-172-31-16-76\" DevicePath \"\"" Feb 9 09:48:56.591043 kubelet[3060]: I0209 09:48:56.590881 3060 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c7d6de51-9b66-422e-a60c-684a9b6568a6-lib-modules\") on node \"ip-172-31-16-76\" DevicePath \"\"" Feb 9 09:48:56.591043 kubelet[3060]: I0209 09:48:56.590907 3060 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c7d6de51-9b66-422e-a60c-684a9b6568a6-host-proc-sys-kernel\") on node \"ip-172-31-16-76\" DevicePath \"\"" Feb 9 09:48:56.591043 kubelet[3060]: I0209 09:48:56.590932 3060 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c7d6de51-9b66-422e-a60c-684a9b6568a6-bpf-maps\") on node \"ip-172-31-16-76\" DevicePath \"\"" Feb 9 09:48:56.591043 kubelet[3060]: I0209 09:48:56.590957 3060 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c7d6de51-9b66-422e-a60c-684a9b6568a6-cni-path\") on node \"ip-172-31-16-76\" DevicePath \"\"" Feb 9 09:48:56.591043 kubelet[3060]: I0209 09:48:56.590982 3060 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-cx2wp\" (UniqueName: \"kubernetes.io/projected/c7d6de51-9b66-422e-a60c-684a9b6568a6-kube-api-access-cx2wp\") on node \"ip-172-31-16-76\" DevicePath \"\"" Feb 9 09:48:56.591509 kubelet[3060]: I0209 09:48:56.591005 3060 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c7d6de51-9b66-422e-a60c-684a9b6568a6-hostproc\") on node \"ip-172-31-16-76\" DevicePath \"\"" Feb 9 09:48:56.591509 kubelet[3060]: I0209 09:48:56.591031 3060 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c7d6de51-9b66-422e-a60c-684a9b6568a6-host-proc-sys-net\") on node \"ip-172-31-16-76\" DevicePath \"\"" Feb 9 09:48:56.591509 kubelet[3060]: I0209 09:48:56.591055 3060 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/c7d6de51-9b66-422e-a60c-684a9b6568a6-cilium-run\") on node \"ip-172-31-16-76\" DevicePath \"\"" Feb 9 09:48:56.591509 kubelet[3060]: I0209 09:48:56.591081 3060 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c7d6de51-9b66-422e-a60c-684a9b6568a6-clustermesh-secrets\") on node \"ip-172-31-16-76\" DevicePath \"\"" Feb 9 09:48:56.591509 kubelet[3060]: I0209 09:48:56.591104 3060 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c7d6de51-9b66-422e-a60c-684a9b6568a6-cilium-config-path\") on node \"ip-172-31-16-76\" DevicePath \"\"" Feb 9 09:48:56.635469 kubelet[3060]: I0209 09:48:56.635343 3060 setters.go:548] "Node became not ready" node="ip-172-31-16-76" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-09 09:48:56.635265596 +0000 UTC m=+143.606176632 LastTransitionTime:2024-02-09 09:48:56.635265596 +0000 UTC m=+143.606176632 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 9 09:48:56.677589 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1425bb07fe98d055e2f6c82f87d692b587fbe5e27c53303d1ca2a279ff8457d3-shm.mount: Deactivated successfully. Feb 9 09:48:56.677903 systemd[1]: var-lib-kubelet-pods-c7d6de51\x2d9b66\x2d422e\x2da60c\x2d684a9b6568a6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcx2wp.mount: Deactivated successfully. Feb 9 09:48:56.678133 systemd[1]: var-lib-kubelet-pods-c7d6de51\x2d9b66\x2d422e\x2da60c\x2d684a9b6568a6-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 09:48:56.678353 systemd[1]: var-lib-kubelet-pods-c7d6de51\x2d9b66\x2d422e\x2da60c\x2d684a9b6568a6-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 9 09:48:56.678644 systemd[1]: var-lib-kubelet-pods-c7d6de51\x2d9b66\x2d422e\x2da60c\x2d684a9b6568a6-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Feb 9 09:48:57.246382 kubelet[3060]: I0209 09:48:57.246332 3060 scope.go:115] "RemoveContainer" containerID="a1b46f1d36753021889a725bbaf9faaea205edf154379d63c14e5f037a341184"
Feb 9 09:48:57.253290 env[1817]: time="2024-02-09T09:48:57.253171963Z" level=info msg="RemoveContainer for \"a1b46f1d36753021889a725bbaf9faaea205edf154379d63c14e5f037a341184\""
Feb 9 09:48:57.259739 env[1817]: time="2024-02-09T09:48:57.259657641Z" level=info msg="RemoveContainer for \"a1b46f1d36753021889a725bbaf9faaea205edf154379d63c14e5f037a341184\" returns successfully"
Feb 9 09:48:57.260388 kubelet[3060]: I0209 09:48:57.260351 3060 scope.go:115] "RemoveContainer" containerID="a1b46f1d36753021889a725bbaf9faaea205edf154379d63c14e5f037a341184"
Feb 9 09:48:57.261143 env[1817]: time="2024-02-09T09:48:57.260975925Z" level=error msg="ContainerStatus for \"a1b46f1d36753021889a725bbaf9faaea205edf154379d63c14e5f037a341184\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a1b46f1d36753021889a725bbaf9faaea205edf154379d63c14e5f037a341184\": not found"
Feb 9 09:48:57.263024 kubelet[3060]: E0209 09:48:57.262961 3060 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a1b46f1d36753021889a725bbaf9faaea205edf154379d63c14e5f037a341184\": not found" containerID="a1b46f1d36753021889a725bbaf9faaea205edf154379d63c14e5f037a341184"
Feb 9 09:48:57.263376 kubelet[3060]: I0209 09:48:57.263328 3060 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:a1b46f1d36753021889a725bbaf9faaea205edf154379d63c14e5f037a341184} err="failed to get container status \"a1b46f1d36753021889a725bbaf9faaea205edf154379d63c14e5f037a341184\": rpc error: code = NotFound desc = an error occurred when try to find container \"a1b46f1d36753021889a725bbaf9faaea205edf154379d63c14e5f037a341184\": not found"
Feb 9 09:48:57.314630 kubelet[3060]: I0209 09:48:57.314563 3060 topology_manager.go:210] "Topology Admit Handler"
Feb 9 09:48:57.314937 kubelet[3060]: E0209 09:48:57.314910 3060 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c7d6de51-9b66-422e-a60c-684a9b6568a6" containerName="mount-cgroup"
Feb 9 09:48:57.315099 kubelet[3060]: I0209 09:48:57.315078 3060 memory_manager.go:346] "RemoveStaleState removing state" podUID="c7d6de51-9b66-422e-a60c-684a9b6568a6" containerName="mount-cgroup"
Feb 9 09:48:57.394823 kubelet[3060]: I0209 09:48:57.394782 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/990364db-2509-406c-ade2-8983916f78e9-hostproc\") pod \"cilium-wmz7p\" (UID: \"990364db-2509-406c-ade2-8983916f78e9\") " pod="kube-system/cilium-wmz7p"
Feb 9 09:48:57.395112 kubelet[3060]: I0209 09:48:57.395087 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/990364db-2509-406c-ade2-8983916f78e9-bpf-maps\") pod \"cilium-wmz7p\" (UID: \"990364db-2509-406c-ade2-8983916f78e9\") " pod="kube-system/cilium-wmz7p"
Feb 9 09:48:57.395284 kubelet[3060]: I0209 09:48:57.395262 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/990364db-2509-406c-ade2-8983916f78e9-cilium-cgroup\") pod \"cilium-wmz7p\" (UID: \"990364db-2509-406c-ade2-8983916f78e9\") " pod="kube-system/cilium-wmz7p"
Feb 9 09:48:57.395458 kubelet[3060]: I0209 09:48:57.395438 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/990364db-2509-406c-ade2-8983916f78e9-cni-path\") pod \"cilium-wmz7p\" (UID: \"990364db-2509-406c-ade2-8983916f78e9\") " pod="kube-system/cilium-wmz7p"
Feb 9 09:48:57.395653 kubelet[3060]: I0209 09:48:57.395597 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/990364db-2509-406c-ade2-8983916f78e9-lib-modules\") pod \"cilium-wmz7p\" (UID: \"990364db-2509-406c-ade2-8983916f78e9\") " pod="kube-system/cilium-wmz7p"
Feb 9 09:48:57.395838 kubelet[3060]: I0209 09:48:57.395817 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/990364db-2509-406c-ade2-8983916f78e9-etc-cni-netd\") pod \"cilium-wmz7p\" (UID: \"990364db-2509-406c-ade2-8983916f78e9\") " pod="kube-system/cilium-wmz7p"
Feb 9 09:48:57.396011 kubelet[3060]: I0209 09:48:57.395986 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/990364db-2509-406c-ade2-8983916f78e9-xtables-lock\") pod \"cilium-wmz7p\" (UID: \"990364db-2509-406c-ade2-8983916f78e9\") " pod="kube-system/cilium-wmz7p"
Feb 9 09:48:57.396219 kubelet[3060]: I0209 09:48:57.396198 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/990364db-2509-406c-ade2-8983916f78e9-clustermesh-secrets\") pod \"cilium-wmz7p\" (UID: \"990364db-2509-406c-ade2-8983916f78e9\") " pod="kube-system/cilium-wmz7p"
Feb 9 09:48:57.396375 kubelet[3060]: I0209 09:48:57.396355 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/990364db-2509-406c-ade2-8983916f78e9-cilium-ipsec-secrets\") pod \"cilium-wmz7p\" (UID: \"990364db-2509-406c-ade2-8983916f78e9\") " pod="kube-system/cilium-wmz7p"
Feb 9 09:48:57.396527 kubelet[3060]: I0209 09:48:57.396507 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/990364db-2509-406c-ade2-8983916f78e9-host-proc-sys-net\") pod \"cilium-wmz7p\" (UID: \"990364db-2509-406c-ade2-8983916f78e9\") " pod="kube-system/cilium-wmz7p"
Feb 9 09:48:57.396729 kubelet[3060]: I0209 09:48:57.396692 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/990364db-2509-406c-ade2-8983916f78e9-hubble-tls\") pod \"cilium-wmz7p\" (UID: \"990364db-2509-406c-ade2-8983916f78e9\") " pod="kube-system/cilium-wmz7p"
Feb 9 09:48:57.396925 kubelet[3060]: I0209 09:48:57.396890 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/990364db-2509-406c-ade2-8983916f78e9-cilium-run\") pod \"cilium-wmz7p\" (UID: \"990364db-2509-406c-ade2-8983916f78e9\") " pod="kube-system/cilium-wmz7p"
Feb 9 09:48:57.397116 kubelet[3060]: I0209 09:48:57.397096 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName:
\"kubernetes.io/configmap/990364db-2509-406c-ade2-8983916f78e9-cilium-config-path\") pod \"cilium-wmz7p\" (UID: \"990364db-2509-406c-ade2-8983916f78e9\") " pod="kube-system/cilium-wmz7p" Feb 9 09:48:57.397276 kubelet[3060]: I0209 09:48:57.397256 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/990364db-2509-406c-ade2-8983916f78e9-host-proc-sys-kernel\") pod \"cilium-wmz7p\" (UID: \"990364db-2509-406c-ade2-8983916f78e9\") " pod="kube-system/cilium-wmz7p" Feb 9 09:48:57.397449 kubelet[3060]: I0209 09:48:57.397427 3060 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrdp9\" (UniqueName: \"kubernetes.io/projected/990364db-2509-406c-ade2-8983916f78e9-kube-api-access-xrdp9\") pod \"cilium-wmz7p\" (UID: \"990364db-2509-406c-ade2-8983916f78e9\") " pod="kube-system/cilium-wmz7p" Feb 9 09:48:57.638322 env[1817]: time="2024-02-09T09:48:57.637964125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wmz7p,Uid:990364db-2509-406c-ade2-8983916f78e9,Namespace:kube-system,Attempt:0,}" Feb 9 09:48:57.666019 env[1817]: time="2024-02-09T09:48:57.665874152Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:48:57.666019 env[1817]: time="2024-02-09T09:48:57.665950976Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:48:57.666407 env[1817]: time="2024-02-09T09:48:57.666313089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:48:57.666921 env[1817]: time="2024-02-09T09:48:57.666833601Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/66313345d771769ce147ad026c251a303eb54bf5bdcfe86f88545d14d29f3318 pid=5157 runtime=io.containerd.runc.v2 Feb 9 09:48:57.753138 env[1817]: time="2024-02-09T09:48:57.753061147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wmz7p,Uid:990364db-2509-406c-ade2-8983916f78e9,Namespace:kube-system,Attempt:0,} returns sandbox id \"66313345d771769ce147ad026c251a303eb54bf5bdcfe86f88545d14d29f3318\"" Feb 9 09:48:57.754001 kubelet[3060]: I0209 09:48:57.753945 3060 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=c7d6de51-9b66-422e-a60c-684a9b6568a6 path="/var/lib/kubelet/pods/c7d6de51-9b66-422e-a60c-684a9b6568a6/volumes" Feb 9 09:48:57.761757 env[1817]: time="2024-02-09T09:48:57.761687518Z" level=info msg="CreateContainer within sandbox \"66313345d771769ce147ad026c251a303eb54bf5bdcfe86f88545d14d29f3318\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 09:48:57.782722 env[1817]: time="2024-02-09T09:48:57.782554611Z" level=info msg="CreateContainer within sandbox \"66313345d771769ce147ad026c251a303eb54bf5bdcfe86f88545d14d29f3318\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"281bb2c3215ddffbe6286b21c68ddb12c029eebc17493f4cbceb4d351a6629a4\"" Feb 9 09:48:57.785163 env[1817]: time="2024-02-09T09:48:57.785099320Z" level=info msg="StartContainer for \"281bb2c3215ddffbe6286b21c68ddb12c029eebc17493f4cbceb4d351a6629a4\"" Feb 9 09:48:57.921435 env[1817]: time="2024-02-09T09:48:57.920881312Z" level=info msg="StartContainer for \"281bb2c3215ddffbe6286b21c68ddb12c029eebc17493f4cbceb4d351a6629a4\" 
returns successfully" Feb 9 09:48:57.975792 env[1817]: time="2024-02-09T09:48:57.975714342Z" level=info msg="shim disconnected" id=281bb2c3215ddffbe6286b21c68ddb12c029eebc17493f4cbceb4d351a6629a4 Feb 9 09:48:57.976136 env[1817]: time="2024-02-09T09:48:57.976102075Z" level=warning msg="cleaning up after shim disconnected" id=281bb2c3215ddffbe6286b21c68ddb12c029eebc17493f4cbceb4d351a6629a4 namespace=k8s.io Feb 9 09:48:57.976282 env[1817]: time="2024-02-09T09:48:57.976253779Z" level=info msg="cleaning up dead shim" Feb 9 09:48:57.990154 env[1817]: time="2024-02-09T09:48:57.990096226Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:48:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5239 runtime=io.containerd.runc.v2\n" Feb 9 09:48:58.256391 env[1817]: time="2024-02-09T09:48:58.256271920Z" level=info msg="CreateContainer within sandbox \"66313345d771769ce147ad026c251a303eb54bf5bdcfe86f88545d14d29f3318\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 09:48:58.284565 env[1817]: time="2024-02-09T09:48:58.284503236Z" level=info msg="CreateContainer within sandbox \"66313345d771769ce147ad026c251a303eb54bf5bdcfe86f88545d14d29f3318\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"027b273547cd943dee4a6ba4108597d7f0b1b222e07b587ab093217284ae2ec3\"" Feb 9 09:48:58.285510 env[1817]: time="2024-02-09T09:48:58.285459685Z" level=info msg="StartContainer for \"027b273547cd943dee4a6ba4108597d7f0b1b222e07b587ab093217284ae2ec3\"" Feb 9 09:48:58.397742 env[1817]: time="2024-02-09T09:48:58.397665695Z" level=info msg="StartContainer for \"027b273547cd943dee4a6ba4108597d7f0b1b222e07b587ab093217284ae2ec3\" returns successfully" Feb 9 09:48:58.459453 env[1817]: time="2024-02-09T09:48:58.459392406Z" level=info msg="shim disconnected" id=027b273547cd943dee4a6ba4108597d7f0b1b222e07b587ab093217284ae2ec3 Feb 9 09:48:58.459891 env[1817]: time="2024-02-09T09:48:58.459855930Z" level=warning msg="cleaning up after shim disconnected" id=027b273547cd943dee4a6ba4108597d7f0b1b222e07b587ab093217284ae2ec3 namespace=k8s.io Feb 9 09:48:58.460017 env[1817]: time="2024-02-09T09:48:58.459989214Z" level=info msg="cleaning up dead shim" Feb 9 09:48:58.474195 env[1817]: time="2024-02-09T09:48:58.474137519Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:48:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5301 runtime=io.containerd.runc.v2\n" Feb 9 09:48:58.679988 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-281bb2c3215ddffbe6286b21c68ddb12c029eebc17493f4cbceb4d351a6629a4-rootfs.mount: Deactivated successfully. 
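Each "shim disconnected" / "cleaning up dead shim" triplet above is containerd reaping a shim after its task exits; for cilium's short-lived init containers (mount-cgroup here, then apply-sysctl-overwrites) a prompt exit is the expected path, not a crash. When it matters whether an exit was clean, the task's status can be read through the containerd Go client; a minimal sketch, assuming the container still exists and reusing an ID from the log:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // ID taken from the log above; any container that still exists will do.
        c, err := client.LoadContainer(ctx, "281bb2c3215ddffbe6286b21c68ddb12c029eebc17493f4cbceb4d351a6629a4")
        if err != nil {
            log.Fatal(err)
        }
        task, err := c.Task(ctx, nil)
        if err != nil {
            log.Fatal(err) // no task means the shim is already gone
        }
        statusC, err := task.Wait(ctx)
        if err != nil {
            log.Fatal(err)
        }
        status := <-statusC // blocks until the task exits
        code, exitedAt, err := status.Result()
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("exit code %d at %s\n", code, exitedAt)
    }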
Feb 9 09:48:58.777338 kubelet[3060]: E0209 09:48:58.777236 3060 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 09:48:59.268075 env[1817]: time="2024-02-09T09:48:59.268012666Z" level=info msg="CreateContainer within sandbox \"66313345d771769ce147ad026c251a303eb54bf5bdcfe86f88545d14d29f3318\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 09:48:59.301103 env[1817]: time="2024-02-09T09:48:59.301017406Z" level=info msg="CreateContainer within sandbox \"66313345d771769ce147ad026c251a303eb54bf5bdcfe86f88545d14d29f3318\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cea87cef27953e81645eafd8243569d77c80b60b20825dd392cc61b4c44390b7\"" Feb 9 09:48:59.310310 env[1817]: time="2024-02-09T09:48:59.308541936Z" level=info msg="StartContainer for \"cea87cef27953e81645eafd8243569d77c80b60b20825dd392cc61b4c44390b7\"" Feb 9 09:48:59.425948 env[1817]: time="2024-02-09T09:48:59.425884433Z" level=info msg="StartContainer for \"cea87cef27953e81645eafd8243569d77c80b60b20825dd392cc61b4c44390b7\" returns successfully" Feb 9 09:48:59.475176 env[1817]: time="2024-02-09T09:48:59.475115602Z" level=info msg="shim disconnected" id=cea87cef27953e81645eafd8243569d77c80b60b20825dd392cc61b4c44390b7 Feb 9 09:48:59.475661 env[1817]: time="2024-02-09T09:48:59.475595507Z" level=warning msg="cleaning up after shim disconnected" id=cea87cef27953e81645eafd8243569d77c80b60b20825dd392cc61b4c44390b7 namespace=k8s.io Feb 9 09:48:59.475836 env[1817]: time="2024-02-09T09:48:59.475803467Z" level=info msg="cleaning up dead shim" Feb 9 09:48:59.490012 env[1817]: time="2024-02-09T09:48:59.489953968Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:48:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5359 runtime=io.containerd.runc.v2\n" Feb 9 09:48:59.679068 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cea87cef27953e81645eafd8243569d77c80b60b20825dd392cc61b4c44390b7-rootfs.mount: Deactivated successfully. Feb 9 09:49:00.271800 env[1817]: time="2024-02-09T09:49:00.271748623Z" level=info msg="CreateContainer within sandbox \"66313345d771769ce147ad026c251a303eb54bf5bdcfe86f88545d14d29f3318\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 09:49:00.320541 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount741833674.mount: Deactivated successfully. Feb 9 09:49:00.340695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2325004728.mount: Deactivated successfully. 
Feb 9 09:49:00.342028 env[1817]: time="2024-02-09T09:49:00.341901095Z" level=info msg="CreateContainer within sandbox \"66313345d771769ce147ad026c251a303eb54bf5bdcfe86f88545d14d29f3318\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f89a68f30d0ff0137c60dc425a6354928e110986b7098b66f7d6f176ffbb105f\"" Feb 9 09:49:00.345255 env[1817]: time="2024-02-09T09:49:00.344228592Z" level=info msg="StartContainer for \"f89a68f30d0ff0137c60dc425a6354928e110986b7098b66f7d6f176ffbb105f\"" Feb 9 09:49:00.445703 env[1817]: time="2024-02-09T09:49:00.445632027Z" level=info msg="StartContainer for \"f89a68f30d0ff0137c60dc425a6354928e110986b7098b66f7d6f176ffbb105f\" returns successfully" Feb 9 09:49:00.483793 env[1817]: time="2024-02-09T09:49:00.483732450Z" level=info msg="shim disconnected" id=f89a68f30d0ff0137c60dc425a6354928e110986b7098b66f7d6f176ffbb105f Feb 9 09:49:00.484347 env[1817]: time="2024-02-09T09:49:00.484311078Z" level=warning msg="cleaning up after shim disconnected" id=f89a68f30d0ff0137c60dc425a6354928e110986b7098b66f7d6f176ffbb105f namespace=k8s.io Feb 9 09:49:00.484518 env[1817]: time="2024-02-09T09:49:00.484489506Z" level=info msg="cleaning up dead shim" Feb 9 09:49:00.500762 env[1817]: time="2024-02-09T09:49:00.500705761Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:49:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5415 runtime=io.containerd.runc.v2\n" Feb 9 09:49:01.293973 env[1817]: time="2024-02-09T09:49:01.293882825Z" level=info msg="CreateContainer within sandbox \"66313345d771769ce147ad026c251a303eb54bf5bdcfe86f88545d14d29f3318\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 09:49:01.337596 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2380237246.mount: Deactivated successfully. Feb 9 09:49:01.344521 env[1817]: time="2024-02-09T09:49:01.344451591Z" level=info msg="CreateContainer within sandbox \"66313345d771769ce147ad026c251a303eb54bf5bdcfe86f88545d14d29f3318\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d20ff60313780d2154b581c0ea9cfe9c5a7eae95bc2d244addd6f6e72cc347da\"" Feb 9 09:49:01.345581 env[1817]: time="2024-02-09T09:49:01.345505323Z" level=info msg="StartContainer for \"d20ff60313780d2154b581c0ea9cfe9c5a7eae95bc2d244addd6f6e72cc347da\"" Feb 9 09:49:01.473239 env[1817]: time="2024-02-09T09:49:01.473172790Z" level=info msg="StartContainer for \"d20ff60313780d2154b581c0ea9cfe9c5a7eae95bc2d244addd6f6e72cc347da\" returns successfully" Feb 9 09:49:02.185982 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Feb 9 09:49:05.286993 systemd[1]: run-containerd-runc-k8s.io-d20ff60313780d2154b581c0ea9cfe9c5a7eae95bc2d244addd6f6e72cc347da-runc.IA1k0s.mount: Deactivated successfully. Feb 9 09:49:06.069072 systemd-networkd[1594]: lxc_health: Link UP Feb 9 09:49:06.079885 (udev-worker)[5979]: Network interface NamePolicy= disabled on kernel command line. 
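With clean-cilium-state done and cilium-agent started, the lxc_health link coming up just below is cilium-agent creating the veth pair it uses for its own connectivity probes, so that line (and the carrier and IPv6LL lines that follow) is the first concrete sign that CNI networking has recovered on the node. Checking it from the node is a one-liner with the vishvananda/netlink package; a sketch that must run on the host itself:

    package main

    import (
        "fmt"
        "log"

        "github.com/vishvananda/netlink"
    )

    func main() {
        // lxc_health only exists on a node where cilium-agent is running.
        link, err := netlink.LinkByName("lxc_health")
        if err != nil {
            log.Fatal(err)
        }
        attrs := link.Attrs()
        fmt.Println(attrs.Name, "state:", attrs.OperState, "mtu:", attrs.MTU)
    }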
Feb 9 09:49:06.107659 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 09:49:06.104757 systemd-networkd[1594]: lxc_health: Gained carrier Feb 9 09:49:07.678464 kubelet[3060]: I0209 09:49:07.678393 3060 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-wmz7p" podStartSLOduration=10.678338899 pod.CreationTimestamp="2024-02-09 09:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:49:02.314780922 +0000 UTC m=+149.285692006" watchObservedRunningTime="2024-02-09 09:49:07.678338899 +0000 UTC m=+154.649249971" Feb 9 09:49:08.003296 systemd-networkd[1594]: lxc_health: Gained IPv6LL Feb 9 09:49:12.277244 systemd[1]: run-containerd-runc-k8s.io-d20ff60313780d2154b581c0ea9cfe9c5a7eae95bc2d244addd6f6e72cc347da-runc.tGoaaF.mount: Deactivated successfully. Feb 9 09:49:12.428364 sshd[5074]: pam_unix(sshd:session): session closed for user core Feb 9 09:49:12.435980 systemd[1]: sshd@27-172.31.16.76:22-139.178.89.65:60398.service: Deactivated successfully. Feb 9 09:49:12.439264 systemd[1]: session-28.scope: Deactivated successfully. Feb 9 09:49:12.440074 systemd-logind[1794]: Session 28 logged out. Waiting for processes to exit. Feb 9 09:49:12.443239 systemd-logind[1794]: Removed session 28. Feb 9 09:49:26.009685 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b120519ce6f48968869c98223cb3aa699a376d5a5d5883edc446e3a8436cf1aa-rootfs.mount: Deactivated successfully. Feb 9 09:49:26.051421 env[1817]: time="2024-02-09T09:49:26.051005381Z" level=info msg="shim disconnected" id=b120519ce6f48968869c98223cb3aa699a376d5a5d5883edc446e3a8436cf1aa Feb 9 09:49:26.051421 env[1817]: time="2024-02-09T09:49:26.051075294Z" level=warning msg="cleaning up after shim disconnected" id=b120519ce6f48968869c98223cb3aa699a376d5a5d5883edc446e3a8436cf1aa namespace=k8s.io Feb 9 09:49:26.051421 env[1817]: time="2024-02-09T09:49:26.051097122Z" level=info msg="cleaning up dead shim" Feb 9 09:49:26.066267 env[1817]: time="2024-02-09T09:49:26.066209962Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:49:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6096 runtime=io.containerd.runc.v2\n" Feb 9 09:49:26.346921 kubelet[3060]: I0209 09:49:26.346324 3060 scope.go:115] "RemoveContainer" containerID="b120519ce6f48968869c98223cb3aa699a376d5a5d5883edc446e3a8436cf1aa" Feb 9 09:49:26.350963 env[1817]: time="2024-02-09T09:49:26.350910393Z" level=info msg="CreateContainer within sandbox \"57520e5674700b14c1d880529f18d3e68620333591c1b04b000bcd21287fc04f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Feb 9 09:49:26.373262 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount342174776.mount: Deactivated successfully. 
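At 09:49:26 an unrelated container (b120519c) dies and, per the lines that follow, the kubelet recreates kube-controller-manager inside its existing sandbox, hence Attempt:1 in the container metadata. Restart churn like this is easiest to quantify from each pod's RestartCount; a minimal client-go sketch, with the kubeconfig path again an assumption:

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/core/.kube/config") // assumed path
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{
            FieldSelector: "spec.nodeName=ip-172-31-16-76", // the node from this log
        })
        if err != nil {
            log.Fatal(err)
        }
        for _, p := range pods.Items {
            for _, s := range p.Status.ContainerStatuses {
                fmt.Println(p.Name, s.Name, "restarts:", s.RestartCount)
            }
        }
    }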
Feb 9 09:49:26.384161 env[1817]: time="2024-02-09T09:49:26.384087393Z" level=info msg="CreateContainer within sandbox \"57520e5674700b14c1d880529f18d3e68620333591c1b04b000bcd21287fc04f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"b31a43a451b9554f405db0ed50d4cacb5316a38fac4a5db9ef8bc2f6b6344789\"" Feb 9 09:49:26.385062 env[1817]: time="2024-02-09T09:49:26.385017562Z" level=info msg="StartContainer for \"b31a43a451b9554f405db0ed50d4cacb5316a38fac4a5db9ef8bc2f6b6344789\"" Feb 9 09:49:26.516117 env[1817]: time="2024-02-09T09:49:26.516030153Z" level=info msg="StartContainer for \"b31a43a451b9554f405db0ed50d4cacb5316a38fac4a5db9ef8bc2f6b6344789\" returns successfully" Feb 9 09:49:26.816637 kubelet[3060]: E0209 09:49:26.816311 3060 controller.go:189] failed to update lease, error: Put "https://172.31.16.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-76?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 9 09:49:31.669795 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3dbeb7516d9290f93c1938676d2430ec1f2280ea10c3921862e1be4c768db9ae-rootfs.mount: Deactivated successfully. Feb 9 09:49:31.682104 env[1817]: time="2024-02-09T09:49:31.682041651Z" level=info msg="shim disconnected" id=3dbeb7516d9290f93c1938676d2430ec1f2280ea10c3921862e1be4c768db9ae Feb 9 09:49:31.682974 env[1817]: time="2024-02-09T09:49:31.682922383Z" level=warning msg="cleaning up after shim disconnected" id=3dbeb7516d9290f93c1938676d2430ec1f2280ea10c3921862e1be4c768db9ae namespace=k8s.io Feb 9 09:49:31.683121 env[1817]: time="2024-02-09T09:49:31.683093528Z" level=info msg="cleaning up dead shim" Feb 9 09:49:31.697766 env[1817]: time="2024-02-09T09:49:31.697701640Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:49:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6157 runtime=io.containerd.runc.v2\n" Feb 9 09:49:32.367431 kubelet[3060]: I0209 09:49:32.366819 3060 scope.go:115] "RemoveContainer" containerID="3dbeb7516d9290f93c1938676d2430ec1f2280ea10c3921862e1be4c768db9ae" Feb 9 09:49:32.370344 env[1817]: time="2024-02-09T09:49:32.370290488Z" level=info msg="CreateContainer within sandbox \"8b17a102e25832ec06d73ed4ffaec37097b3c0e541bf29fb8ab138ca86e9266f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Feb 9 09:49:32.390129 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2353940455.mount: Deactivated successfully. 
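Two things overlap in this stretch: kube-scheduler (3dbeb751) dies and is recreated just like the controller-manager before it, and the kubelet's Lease renewal against the apiserver at 172.31.16.76:6443 times out. That lease is the heartbeat the control plane uses to judge node health, so repeated failures here matter more than the container restarts themselves. The lease object can be inspected directly to see how stale the node looks; a sketch with client-go's coordination API, kubeconfig path assumed:

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/core/.kube/config") // assumed path
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        // Node leases live in kube-node-lease, one per node, named after the node.
        lease, err := cs.CoordinationV1().Leases("kube-node-lease").Get(
            context.TODO(), "ip-172-31-16-76", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        if lease.Spec.RenewTime != nil {
            fmt.Println("last successful renew:", lease.Spec.RenewTime.Time)
        }
    }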
Feb 9 09:49:32.402814 env[1817]: time="2024-02-09T09:49:32.402728809Z" level=info msg="CreateContainer within sandbox \"8b17a102e25832ec06d73ed4ffaec37097b3c0e541bf29fb8ab138ca86e9266f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"17435bc8673dd72d8ff475a8bfce11d3c63272545d112f03c2484735af22b8f5\""
Feb 9 09:49:32.403550 env[1817]: time="2024-02-09T09:49:32.403508621Z" level=info msg="StartContainer for \"17435bc8673dd72d8ff475a8bfce11d3c63272545d112f03c2484735af22b8f5\""
Feb 9 09:49:32.528112 env[1817]: time="2024-02-09T09:49:32.528018615Z" level=info msg="StartContainer for \"17435bc8673dd72d8ff475a8bfce11d3c63272545d112f03c2484735af22b8f5\" returns successfully"
Feb 9 09:49:33.403273 env[1817]: time="2024-02-09T09:49:33.403220364Z" level=info msg="StopPodSandbox for \"156e8e5781f1b4ca0d9764f30566edc4b1cc764d06f77f5282a973e1a9ee335f\""
Feb 9 09:49:33.404133 env[1817]: time="2024-02-09T09:49:33.404064892Z" level=info msg="TearDown network for sandbox \"156e8e5781f1b4ca0d9764f30566edc4b1cc764d06f77f5282a973e1a9ee335f\" successfully"
Feb 9 09:49:33.404272 env[1817]: time="2024-02-09T09:49:33.404239049Z" level=info msg="StopPodSandbox for \"156e8e5781f1b4ca0d9764f30566edc4b1cc764d06f77f5282a973e1a9ee335f\" returns successfully"
Feb 9 09:49:33.405068 env[1817]: time="2024-02-09T09:49:33.405018008Z" level=info msg="RemovePodSandbox for \"156e8e5781f1b4ca0d9764f30566edc4b1cc764d06f77f5282a973e1a9ee335f\""
Feb 9 09:49:33.405217 env[1817]: time="2024-02-09T09:49:33.405077084Z" level=info msg="Forcibly stopping sandbox \"156e8e5781f1b4ca0d9764f30566edc4b1cc764d06f77f5282a973e1a9ee335f\""
Feb 9 09:49:33.405314 env[1817]: time="2024-02-09T09:49:33.405209301Z" level=info msg="TearDown network for sandbox \"156e8e5781f1b4ca0d9764f30566edc4b1cc764d06f77f5282a973e1a9ee335f\" successfully"
Feb 9 09:49:33.409799 env[1817]: time="2024-02-09T09:49:33.409734198Z" level=info msg="RemovePodSandbox \"156e8e5781f1b4ca0d9764f30566edc4b1cc764d06f77f5282a973e1a9ee335f\" returns successfully"
Feb 9 09:49:33.410492 env[1817]: time="2024-02-09T09:49:33.410451753Z" level=info msg="StopPodSandbox for \"564e926dc88dc6b4f72da20ea0b7c48383d337adcbb62290cf70ba218129d56f\""
Feb 9 09:49:33.410793 env[1817]: time="2024-02-09T09:49:33.410730694Z" level=info msg="TearDown network for sandbox \"564e926dc88dc6b4f72da20ea0b7c48383d337adcbb62290cf70ba218129d56f\" successfully"
Feb 9 09:49:33.410948 env[1817]: time="2024-02-09T09:49:33.410915615Z" level=info msg="StopPodSandbox for \"564e926dc88dc6b4f72da20ea0b7c48383d337adcbb62290cf70ba218129d56f\" returns successfully"
Feb 9 09:49:33.411585 env[1817]: time="2024-02-09T09:49:33.411548282Z" level=info msg="RemovePodSandbox for \"564e926dc88dc6b4f72da20ea0b7c48383d337adcbb62290cf70ba218129d56f\""
Feb 9 09:49:33.411826 env[1817]: time="2024-02-09T09:49:33.411757911Z" level=info msg="Forcibly stopping sandbox \"564e926dc88dc6b4f72da20ea0b7c48383d337adcbb62290cf70ba218129d56f\""
Feb 9 09:49:33.412492 env[1817]: time="2024-02-09T09:49:33.412416390Z" level=info msg="TearDown network for sandbox \"564e926dc88dc6b4f72da20ea0b7c48383d337adcbb62290cf70ba218129d56f\" successfully"
Feb 9 09:49:33.430263 env[1817]: time="2024-02-09T09:49:33.430204803Z" level=info msg="RemovePodSandbox \"564e926dc88dc6b4f72da20ea0b7c48383d337adcbb62290cf70ba218129d56f\" returns successfully"
Feb 9 09:49:33.431447 env[1817]: time="2024-02-09T09:49:33.431405169Z" level=info msg="StopPodSandbox for \"1425bb07fe98d055e2f6c82f87d692b587fbe5e27c53303d1ca2a279ff8457d3\""
Feb 9 09:49:33.431784 env[1817]: time="2024-02-09T09:49:33.431719690Z" level=info msg="TearDown network for sandbox \"1425bb07fe98d055e2f6c82f87d692b587fbe5e27c53303d1ca2a279ff8457d3\" successfully"
Feb 9 09:49:33.431917 env[1817]: time="2024-02-09T09:49:33.431885447Z" level=info msg="StopPodSandbox for \"1425bb07fe98d055e2f6c82f87d692b587fbe5e27c53303d1ca2a279ff8457d3\" returns successfully"
Feb 9 09:49:33.432542 env[1817]: time="2024-02-09T09:49:33.432506102Z" level=info msg="RemovePodSandbox for \"1425bb07fe98d055e2f6c82f87d692b587fbe5e27c53303d1ca2a279ff8457d3\""
Feb 9 09:49:33.432788 env[1817]: time="2024-02-09T09:49:33.432732555Z" level=info msg="Forcibly stopping sandbox \"1425bb07fe98d055e2f6c82f87d692b587fbe5e27c53303d1ca2a279ff8457d3\""
Feb 9 09:49:33.433007 env[1817]: time="2024-02-09T09:49:33.432973792Z" level=info msg="TearDown network for sandbox \"1425bb07fe98d055e2f6c82f87d692b587fbe5e27c53303d1ca2a279ff8457d3\" successfully"
Feb 9 09:49:33.438401 env[1817]: time="2024-02-09T09:49:33.438331397Z" level=info msg="RemovePodSandbox \"1425bb07fe98d055e2f6c82f87d692b587fbe5e27c53303d1ca2a279ff8457d3\" returns successfully"
Feb 9 09:49:36.816957 kubelet[3060]: E0209 09:49:36.816870 3060 controller.go:189] failed to update lease, error: Put "https://172.31.16.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-76?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 9 09:49:46.817431 kubelet[3060]: E0209 09:49:46.817373 3060 controller.go:189] failed to update lease, error: Put "https://172.31.16.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-76?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
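The same lease Put keeps failing at 09:49:36 and 09:49:46; the timeout=10s in the URL is the client-side request timeout, so the apiserver is not answering within ten seconds rather than rejecting the write. A blunt first check is whether the apiserver responds on /readyz at all; a sketch that assumes anonymous access to the health endpoints (the default RBAC binding grants it) and skips TLS verification only because it is a one-off diagnostic:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "log"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Skipping verification is acceptable only for a throwaway probe like this.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://172.31.16.76:6443/readyz?verbose")
        if err != nil {
            log.Fatal(err) // a timeout here matches what the kubelet is seeing
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.Status)
        fmt.Println(string(body))
    }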