Feb 9 09:46:24.018138 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Feb 9 09:46:24.018179 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Feb 9 08:56:26 -00 2024
Feb 9 09:46:24.018205 kernel: efi: EFI v2.70 by EDK II
Feb 9 09:46:24.018221 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7ac1aa98 MEMRESERVE=0x71a8cf98
Feb 9 09:46:24.018235 kernel: ACPI: Early table checksum verification disabled
Feb 9 09:46:24.018249 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Feb 9 09:46:24.018266 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Feb 9 09:46:24.018281 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 9 09:46:24.018295 kernel: ACPI: DSDT 0x0000000078640000 00154F (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Feb 9 09:46:24.018310 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 9 09:46:24.018329 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Feb 9 09:46:24.018343 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Feb 9 09:46:24.018357 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Feb 9 09:46:24.018371 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 9 09:46:24.018387 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Feb 9 09:46:24.018406 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Feb 9 09:46:24.018421 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Feb 9 09:46:24.018435 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Feb 9 09:46:24.018459 kernel: printk: bootconsole [uart0] enabled
Feb 9 09:46:24.018474 kernel: NUMA: Failed to initialise from firmware
Feb 9 09:46:24.018490 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 9 09:46:24.018505 kernel: NUMA: NODE_DATA [mem 0x4b5841900-0x4b5846fff]
Feb 9 09:46:24.018520 kernel: Zone ranges:
Feb 9 09:46:24.018534 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Feb 9 09:46:24.018585 kernel: DMA32 empty
Feb 9 09:46:24.022458 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Feb 9 09:46:24.022486 kernel: Movable zone start for each node
Feb 9 09:46:24.022502 kernel: Early memory node ranges
Feb 9 09:46:24.022518 kernel: node 0: [mem 0x0000000040000000-0x00000000786effff]
Feb 9 09:46:24.022533 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Feb 9 09:46:24.022548 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Feb 9 09:46:24.022562 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Feb 9 09:46:24.022606 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Feb 9 09:46:24.022621 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Feb 9 09:46:24.022636 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 9 09:46:24.022651 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Feb 9 09:46:24.022666 kernel: psci: probing for conduit method from ACPI.
Feb 9 09:46:24.022698 kernel: psci: PSCIv1.0 detected in firmware.
Feb 9 09:46:24.022720 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 9 09:46:24.022735 kernel: psci: Trusted OS migration not required
Feb 9 09:46:24.022756 kernel: psci: SMC Calling Convention v1.1
Feb 9 09:46:24.022772 kernel: ACPI: SRAT not present
Feb 9 09:46:24.022788 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784
Feb 9 09:46:24.022808 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096
Feb 9 09:46:24.022824 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 9 09:46:24.022840 kernel: Detected PIPT I-cache on CPU0
Feb 9 09:46:24.022855 kernel: CPU features: detected: GIC system register CPU interface
Feb 9 09:46:24.022883 kernel: CPU features: detected: Spectre-v2
Feb 9 09:46:24.022900 kernel: CPU features: detected: Spectre-v3a
Feb 9 09:46:24.022916 kernel: CPU features: detected: Spectre-BHB
Feb 9 09:46:24.022930 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 9 09:46:24.022966 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 9 09:46:24.022983 kernel: CPU features: detected: ARM erratum 1742098
Feb 9 09:46:24.022998 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Feb 9 09:46:24.023020 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Feb 9 09:46:24.023035 kernel: Policy zone: Normal
Feb 9 09:46:24.023053 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=14ffd9340f674a8d04c9d43eed85484d8b2b7e2bcd8b36a975c9ac66063d537d
Feb 9 09:46:24.023070 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 09:46:24.023086 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 9 09:46:24.023101 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 9 09:46:24.023117 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 09:46:24.023152 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Feb 9 09:46:24.023170 kernel: Memory: 3826316K/4030464K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 204148K reserved, 0K cma-reserved)
Feb 9 09:46:24.023185 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 9 09:46:24.023206 kernel: trace event string verifier disabled
Feb 9 09:46:24.023222 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 9 09:46:24.023238 kernel: rcu: RCU event tracing is enabled.
Feb 9 09:46:24.023254 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 9 09:46:24.023270 kernel: Trampoline variant of Tasks RCU enabled.
Feb 9 09:46:24.023286 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 09:46:24.023301 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 09:46:24.023316 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 9 09:46:24.023332 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 9 09:46:24.023347 kernel: GICv3: 96 SPIs implemented
Feb 9 09:46:24.023362 kernel: GICv3: 0 Extended SPIs implemented
Feb 9 09:46:24.023377 kernel: GICv3: Distributor has no Range Selector support
Feb 9 09:46:24.023396 kernel: Root IRQ handler: gic_handle_irq
Feb 9 09:46:24.023411 kernel: GICv3: 16 PPIs implemented
Feb 9 09:46:24.023426 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Feb 9 09:46:24.023441 kernel: ACPI: SRAT not present
Feb 9 09:46:24.023456 kernel: ITS [mem 0x10080000-0x1009ffff]
Feb 9 09:46:24.023471 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000a0000 (indirect, esz 8, psz 64K, shr 1)
Feb 9 09:46:24.023487 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000b0000 (flat, esz 8, psz 64K, shr 1)
Feb 9 09:46:24.023502 kernel: GICv3: using LPI property table @0x00000004000c0000
Feb 9 09:46:24.023517 kernel: ITS: Using hypervisor restricted LPI range [128]
Feb 9 09:46:24.023533 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000d0000
Feb 9 09:46:24.023548 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Feb 9 09:46:24.023587 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Feb 9 09:46:24.023607 kernel: sched_clock: 56 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Feb 9 09:46:24.023623 kernel: Console: colour dummy device 80x25
Feb 9 09:46:24.023639 kernel: printk: console [tty1] enabled
Feb 9 09:46:24.023654 kernel: ACPI: Core revision 20210730
Feb 9 09:46:24.023670 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Feb 9 09:46:24.023686 kernel: pid_max: default: 32768 minimum: 301
Feb 9 09:46:24.023702 kernel: LSM: Security Framework initializing
Feb 9 09:46:24.023717 kernel: SELinux: Initializing.
Feb 9 09:46:24.023733 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 09:46:24.023754 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 09:46:24.023770 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 09:46:24.023785 kernel: Platform MSI: ITS@0x10080000 domain created
Feb 9 09:46:24.023801 kernel: PCI/MSI: ITS@0x10080000 domain created
Feb 9 09:46:24.023816 kernel: Remapping and enabling EFI services.
Feb 9 09:46:24.023832 kernel: smp: Bringing up secondary CPUs ...
Feb 9 09:46:24.023847 kernel: Detected PIPT I-cache on CPU1
Feb 9 09:46:24.023863 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Feb 9 09:46:24.023879 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000e0000
Feb 9 09:46:24.023899 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Feb 9 09:46:24.023914 kernel: smp: Brought up 1 node, 2 CPUs
Feb 9 09:46:24.023929 kernel: SMP: Total of 2 processors activated.
Feb 9 09:46:24.023945 kernel: CPU features: detected: 32-bit EL0 Support
Feb 9 09:46:24.023960 kernel: CPU features: detected: 32-bit EL1 Support
Feb 9 09:46:24.023976 kernel: CPU features: detected: CRC32 instructions
Feb 9 09:46:24.023992 kernel: CPU: All CPU(s) started at EL1
Feb 9 09:46:24.024007 kernel: alternatives: patching kernel code
Feb 9 09:46:24.024023 kernel: devtmpfs: initialized
Feb 9 09:46:24.024042 kernel: KASLR disabled due to lack of seed
Feb 9 09:46:24.024058 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 09:46:24.024074 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 9 09:46:24.024100 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 09:46:24.024121 kernel: SMBIOS 3.0.0 present.
Feb 9 09:46:24.024137 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Feb 9 09:46:24.024153 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 09:46:24.024175 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 9 09:46:24.024198 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 9 09:46:24.024221 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 9 09:46:24.024243 kernel: audit: initializing netlink subsys (disabled)
Feb 9 09:46:24.024260 kernel: audit: type=2000 audit(0.248:1): state=initialized audit_enabled=0 res=1
Feb 9 09:46:24.024280 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 09:46:24.024296 kernel: cpuidle: using governor menu
Feb 9 09:46:24.024312 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 9 09:46:24.024329 kernel: ASID allocator initialised with 32768 entries
Feb 9 09:46:24.024345 kernel: ACPI: bus type PCI registered
Feb 9 09:46:24.024365 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 09:46:24.024381 kernel: Serial: AMBA PL011 UART driver
Feb 9 09:46:24.024398 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 09:46:24.024414 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Feb 9 09:46:24.024430 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 09:46:24.024446 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Feb 9 09:46:24.024462 kernel: cryptd: max_cpu_qlen set to 1000
Feb 9 09:46:24.024479 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 9 09:46:24.024495 kernel: ACPI: Added _OSI(Module Device)
Feb 9 09:46:24.024515 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 09:46:24.024531 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 09:46:24.024547 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 09:46:24.024581 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 09:46:24.024604 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 09:46:24.024621 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 09:46:24.024638 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 9 09:46:24.024654 kernel: ACPI: Interpreter enabled
Feb 9 09:46:24.024673 kernel: ACPI: Using GIC for interrupt routing
Feb 9 09:46:24.024697 kernel: ACPI: MCFG table detected, 1 entries
Feb 9 09:46:24.024714 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Feb 9 09:46:24.024997 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 9 09:46:24.025201 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 9 09:46:24.025422 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 9 09:46:24.027725 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Feb 9 09:46:24.027942 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Feb 9 09:46:24.027972 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Feb 9 09:46:24.027991 kernel: acpiphp: Slot [1] registered
Feb 9 09:46:24.028008 kernel: acpiphp: Slot [2] registered
Feb 9 09:46:24.028024 kernel: acpiphp: Slot [3] registered
Feb 9 09:46:24.028041 kernel: acpiphp: Slot [4] registered
Feb 9 09:46:24.028058 kernel: acpiphp: Slot [5] registered
Feb 9 09:46:24.028074 kernel: acpiphp: Slot [6] registered
Feb 9 09:46:24.028090 kernel: acpiphp: Slot [7] registered
Feb 9 09:46:24.028106 kernel: acpiphp: Slot [8] registered
Feb 9 09:46:24.028126 kernel: acpiphp: Slot [9] registered
Feb 9 09:46:24.028143 kernel: acpiphp: Slot [10] registered
Feb 9 09:46:24.028159 kernel: acpiphp: Slot [11] registered
Feb 9 09:46:24.028175 kernel: acpiphp: Slot [12] registered
Feb 9 09:46:24.028191 kernel: acpiphp: Slot [13] registered
Feb 9 09:46:24.028207 kernel: acpiphp: Slot [14] registered
Feb 9 09:46:24.028224 kernel: acpiphp: Slot [15] registered
Feb 9 09:46:24.028240 kernel: acpiphp: Slot [16] registered
Feb 9 09:46:24.028256 kernel: acpiphp: Slot [17] registered
Feb 9 09:46:24.028272 kernel: acpiphp: Slot [18] registered
Feb 9 09:46:24.028292 kernel: acpiphp: Slot [19] registered
Feb 9 09:46:24.028308 kernel: acpiphp: Slot [20] registered
Feb 9 09:46:24.028324 kernel: acpiphp: Slot [21] registered
Feb 9 09:46:24.028340 kernel: acpiphp: Slot [22] registered
Feb 9 09:46:24.028356 kernel: acpiphp: Slot [23] registered
Feb 9 09:46:24.028372 kernel: acpiphp: Slot [24] registered
Feb 9 09:46:24.028388 kernel: acpiphp: Slot [25] registered
Feb 9 09:46:24.028404 kernel: acpiphp: Slot [26] registered
Feb 9 09:46:24.028420 kernel: acpiphp: Slot [27] registered
Feb 9 09:46:24.028441 kernel: acpiphp: Slot [28] registered
Feb 9 09:46:24.028457 kernel: acpiphp: Slot [29] registered
Feb 9 09:46:24.028473 kernel: acpiphp: Slot [30] registered
Feb 9 09:46:24.028489 kernel: acpiphp: Slot [31] registered
Feb 9 09:46:24.028505 kernel: PCI host bridge to bus 0000:00
Feb 9 09:46:24.028760 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Feb 9 09:46:24.028950 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 9 09:46:24.029130 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Feb 9 09:46:24.029328 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Feb 9 09:46:24.029581 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Feb 9 09:46:24.033757 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Feb 9 09:46:24.034005 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Feb 9 09:46:24.034267 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 9 09:46:24.034478 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Feb 9 09:46:24.047788 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 9 09:46:24.048030 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 9 09:46:24.048232 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Feb 9 09:46:24.048433 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Feb 9 09:46:24.048659 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Feb 9 09:46:24.048861 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 9 09:46:24.049065 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Feb 9 09:46:24.049279 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Feb 9 09:46:24.049484 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Feb 9 09:46:24.051798 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Feb 9 09:46:24.052050 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Feb 9 09:46:24.052237 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Feb 9 09:46:24.052422 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 9 09:46:24.053670 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Feb 9 09:46:24.053704 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 9 09:46:24.053723 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 9 09:46:24.053740 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 9 09:46:24.053757 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 9 09:46:24.053773 kernel: iommu: Default domain type: Translated
Feb 9 09:46:24.053790 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 9 09:46:24.053806 kernel: vgaarb: loaded
Feb 9 09:46:24.053822 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 09:46:24.053838 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 9 09:46:24.053859 kernel: PTP clock support registered
Feb 9 09:46:24.053876 kernel: Registered efivars operations
Feb 9 09:46:24.053892 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 9 09:46:24.053908 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 09:46:24.053924 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 09:46:24.053940 kernel: pnp: PnP ACPI init
Feb 9 09:46:24.054144 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Feb 9 09:46:24.054169 kernel: pnp: PnP ACPI: found 1 devices
Feb 9 09:46:24.054187 kernel: NET: Registered PF_INET protocol family
Feb 9 09:46:24.054208 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 09:46:24.054225 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 9 09:46:24.054242 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 09:46:24.054258 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 9 09:46:24.054274 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 9 09:46:24.054291 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 9 09:46:24.054308 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 09:46:24.054324 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 09:46:24.054341 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 09:46:24.054362 kernel: PCI: CLS 0 bytes, default 64
Feb 9 09:46:24.054378 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Feb 9 09:46:24.054394 kernel: kvm [1]: HYP mode not available
Feb 9 09:46:24.054411 kernel: Initialise system trusted keyrings
Feb 9 09:46:24.054442 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 9 09:46:24.054459 kernel: Key type asymmetric registered
Feb 9 09:46:24.054476 kernel: Asymmetric key parser 'x509' registered
Feb 9 09:46:24.054492 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 09:46:24.054508 kernel: io scheduler mq-deadline registered
Feb 9 09:46:24.054530 kernel: io scheduler kyber registered
Feb 9 09:46:24.054546 kernel: io scheduler bfq registered
Feb 9 09:46:24.055827 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Feb 9 09:46:24.055862 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 9 09:46:24.055880 kernel: ACPI: button: Power Button [PWRB]
Feb 9 09:46:24.055896 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 09:46:24.055914 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Feb 9 09:46:24.056113 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Feb 9 09:46:24.056143 kernel: printk: console [ttyS0] disabled
Feb 9 09:46:24.056161 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Feb 9 09:46:24.056177 kernel: printk: console [ttyS0] enabled
Feb 9 09:46:24.056193 kernel: printk: bootconsole [uart0] disabled
Feb 9 09:46:24.056210 kernel: thunder_xcv, ver 1.0
Feb 9 09:46:24.056226 kernel: thunder_bgx, ver 1.0
Feb 9 09:46:24.056242 kernel: nicpf, ver 1.0
Feb 9 09:46:24.056258 kernel: nicvf, ver 1.0
Feb 9 09:46:24.056475 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 9 09:46:24.056732 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-09T09:46:23 UTC (1707471983)
Feb 9 09:46:24.056758 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 9 09:46:24.056775 kernel: NET: Registered PF_INET6 protocol family
Feb 9 09:46:24.056792 kernel: Segment Routing with IPv6
Feb 9 09:46:24.056808 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 09:46:24.056824 kernel: NET: Registered PF_PACKET protocol family
Feb 9 09:46:24.056840 kernel: Key type dns_resolver registered
Feb 9 09:46:24.056856 kernel: registered taskstats version 1
Feb 9 09:46:24.056878 kernel: Loading compiled-in X.509 certificates
Feb 9 09:46:24.056895 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: ca91574208414224935c9cea513398977daf917d'
Feb 9 09:46:24.056911 kernel: Key type .fscrypt registered
Feb 9 09:46:24.056927 kernel: Key type fscrypt-provisioning registered
Feb 9 09:46:24.056944 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 9 09:46:24.056960 kernel: ima: Allocated hash algorithm: sha1
Feb 9 09:46:24.056977 kernel: ima: No architecture policies found
Feb 9 09:46:24.056993 kernel: Freeing unused kernel memory: 34688K
Feb 9 09:46:24.057009 kernel: Run /init as init process
Feb 9 09:46:24.057029 kernel: with arguments:
Feb 9 09:46:24.057045 kernel: /init
Feb 9 09:46:24.057061 kernel: with environment:
Feb 9 09:46:24.057077 kernel: HOME=/
Feb 9 09:46:24.057093 kernel: TERM=linux
Feb 9 09:46:24.057109 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 9 09:46:24.057130 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 09:46:24.057150 systemd[1]: Detected virtualization amazon.
Feb 9 09:46:24.057173 systemd[1]: Detected architecture arm64.
Feb 9 09:46:24.057190 systemd[1]: Running in initrd.
Feb 9 09:46:24.057208 systemd[1]: No hostname configured, using default hostname.
Feb 9 09:46:24.057224 systemd[1]: Hostname set to .
Feb 9 09:46:24.057243 systemd[1]: Initializing machine ID from VM UUID.
Feb 9 09:46:24.057260 systemd[1]: Queued start job for default target initrd.target.
Feb 9 09:46:24.057277 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 09:46:24.057307 systemd[1]: Reached target cryptsetup.target.
Feb 9 09:46:24.057333 systemd[1]: Reached target paths.target.
Feb 9 09:46:24.057351 systemd[1]: Reached target slices.target.
Feb 9 09:46:24.057368 systemd[1]: Reached target swap.target.
Feb 9 09:46:24.057386 systemd[1]: Reached target timers.target.
Feb 9 09:46:24.057404 systemd[1]: Listening on iscsid.socket.
Feb 9 09:46:24.057422 systemd[1]: Listening on iscsiuio.socket.
Feb 9 09:46:24.057439 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 09:46:24.057457 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 09:46:24.057479 systemd[1]: Listening on systemd-journald.socket.
Feb 9 09:46:24.057497 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 09:46:24.057514 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 09:46:24.057532 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 09:46:24.057549 systemd[1]: Reached target sockets.target.
Feb 9 09:46:24.063212 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 09:46:24.063243 systemd[1]: Finished network-cleanup.service.
Feb 9 09:46:24.063261 systemd[1]: Starting systemd-fsck-usr.service...
Feb 9 09:46:24.063279 systemd[1]: Starting systemd-journald.service...
Feb 9 09:46:24.063305 systemd[1]: Starting systemd-modules-load.service...
Feb 9 09:46:24.063323 systemd[1]: Starting systemd-resolved.service...
Feb 9 09:46:24.063340 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 9 09:46:24.063358 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 09:46:24.063376 systemd[1]: Finished systemd-fsck-usr.service.
Feb 9 09:46:24.063393 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 9 09:46:24.063411 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 9 09:46:24.063429 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 09:46:24.063446 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 09:46:24.063468 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 9 09:46:24.063487 kernel: audit: type=1130 audit(1707471984.038:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:24.063505 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 9 09:46:24.063526 systemd-journald[309]: Journal started
Feb 9 09:46:24.063654 systemd-journald[309]: Runtime Journal (/run/log/journal/ec22fad25a7caef9442eed03e8a6b8a0) is 8.0M, max 75.4M, 67.4M free.
Feb 9 09:46:24.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:23.980988 systemd-modules-load[310]: Inserted module 'overlay'
Feb 9 09:46:24.024821 systemd-resolved[311]: Positive Trust Anchors:
Feb 9 09:46:24.024836 systemd-resolved[311]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 09:46:24.075497 systemd[1]: Starting dracut-cmdline.service...
Feb 9 09:46:24.024895 systemd-resolved[311]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 09:46:24.098145 systemd-modules-load[310]: Inserted module 'br_netfilter'
Feb 9 09:46:24.100729 kernel: Bridge firewalling registered
Feb 9 09:46:24.107230 systemd[1]: Started systemd-journald.service.
Feb 9 09:46:24.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:24.117608 kernel: audit: type=1130 audit(1707471984.105:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:24.117787 dracut-cmdline[326]: dracut-dracut-053
Feb 9 09:46:24.126778 dracut-cmdline[326]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=14ffd9340f674a8d04c9d43eed85484d8b2b7e2bcd8b36a975c9ac66063d537d
Feb 9 09:46:24.152600 kernel: SCSI subsystem initialized
Feb 9 09:46:24.176841 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 9 09:46:24.176917 kernel: device-mapper: uevent: version 1.0.3
Feb 9 09:46:24.182757 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 9 09:46:24.188546 systemd-modules-load[310]: Inserted module 'dm_multipath'
Feb 9 09:46:24.191862 systemd[1]: Finished systemd-modules-load.service.
Feb 9 09:46:24.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:24.203556 systemd[1]: Starting systemd-sysctl.service...
Feb 9 09:46:24.208606 kernel: audit: type=1130 audit(1707471984.192:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:24.219294 systemd[1]: Finished systemd-sysctl.service.
Feb 9 09:46:24.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:24.231602 kernel: audit: type=1130 audit(1707471984.221:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:24.298608 kernel: Loading iSCSI transport class v2.0-870.
Feb 9 09:46:24.312752 kernel: iscsi: registered transport (tcp)
Feb 9 09:46:24.336687 kernel: iscsi: registered transport (qla4xxx)
Feb 9 09:46:24.336766 kernel: QLogic iSCSI HBA Driver
Feb 9 09:46:24.517222 systemd-resolved[311]: Defaulting to hostname 'linux'.
Feb 9 09:46:24.519728 kernel: random: crng init done
Feb 9 09:46:24.521098 systemd[1]: Started systemd-resolved.service.
Feb 9 09:46:24.524260 systemd[1]: Reached target nss-lookup.target.
Feb 9 09:46:24.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:24.534607 kernel: audit: type=1130 audit(1707471984.522:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:24.543536 systemd[1]: Finished dracut-cmdline.service.
Feb 9 09:46:24.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:24.550744 systemd[1]: Starting dracut-pre-udev.service...
Feb 9 09:46:24.562460 kernel: audit: type=1130 audit(1707471984.547:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:24.615624 kernel: raid6: neonx8 gen() 6361 MB/s
Feb 9 09:46:24.633598 kernel: raid6: neonx8 xor() 4639 MB/s
Feb 9 09:46:24.651595 kernel: raid6: neonx4 gen() 6413 MB/s
Feb 9 09:46:24.669596 kernel: raid6: neonx4 xor() 4809 MB/s
Feb 9 09:46:24.687597 kernel: raid6: neonx2 gen() 5648 MB/s
Feb 9 09:46:24.705597 kernel: raid6: neonx2 xor() 4471 MB/s
Feb 9 09:46:24.723595 kernel: raid6: neonx1 gen() 4425 MB/s
Feb 9 09:46:24.741601 kernel: raid6: neonx1 xor() 3618 MB/s
Feb 9 09:46:24.759595 kernel: raid6: int64x8 gen() 3398 MB/s
Feb 9 09:46:24.777595 kernel: raid6: int64x8 xor() 2074 MB/s
Feb 9 09:46:24.795596 kernel: raid6: int64x4 gen() 3750 MB/s
Feb 9 09:46:24.813596 kernel: raid6: int64x4 xor() 2176 MB/s
Feb 9 09:46:24.831594 kernel: raid6: int64x2 gen() 3558 MB/s
Feb 9 09:46:24.849597 kernel: raid6: int64x2 xor() 1932 MB/s
Feb 9 09:46:24.867595 kernel: raid6: int64x1 gen() 2755 MB/s
Feb 9 09:46:24.887088 kernel: raid6: int64x1 xor() 1439 MB/s
Feb 9 09:46:24.887121 kernel: raid6: using algorithm neonx4 gen() 6413 MB/s
Feb 9 09:46:24.887146 kernel: raid6: .... xor() 4809 MB/s, rmw enabled
Feb 9 09:46:24.888926 kernel: raid6: using neon recovery algorithm
Feb 9 09:46:24.907603 kernel: xor: measuring software checksum speed
Feb 9 09:46:24.910597 kernel: 8regs : 9332 MB/sec
Feb 9 09:46:24.910627 kernel: 32regs : 11107 MB/sec
Feb 9 09:46:24.916668 kernel: arm64_neon : 9614 MB/sec
Feb 9 09:46:24.916699 kernel: xor: using function: 32regs (11107 MB/sec)
Feb 9 09:46:25.006623 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Feb 9 09:46:25.024051 systemd[1]: Finished dracut-pre-udev.service.
Feb 9 09:46:25.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:25.024000 audit: BPF prog-id=7 op=LOAD
Feb 9 09:46:25.038168 kernel: audit: type=1130 audit(1707471985.024:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:25.038224 kernel: audit: type=1334 audit(1707471985.024:9): prog-id=7 op=LOAD
Feb 9 09:46:25.035987 systemd[1]: Starting systemd-udevd.service...
Feb 9 09:46:25.041602 kernel: audit: type=1334 audit(1707471985.024:10): prog-id=8 op=LOAD
Feb 9 09:46:25.024000 audit: BPF prog-id=8 op=LOAD
Feb 9 09:46:25.068055 systemd-udevd[508]: Using default interface naming scheme 'v252'.
Feb 9 09:46:25.078027 systemd[1]: Started systemd-udevd.service.
Feb 9 09:46:25.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:25.083212 systemd[1]: Starting dracut-pre-trigger.service...
Feb 9 09:46:25.115514 dracut-pre-trigger[517]: rd.md=0: removing MD RAID activation
Feb 9 09:46:25.176305 systemd[1]: Finished dracut-pre-trigger.service.
Feb 9 09:46:25.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:25.180405 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 09:46:25.283716 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 09:46:25.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Feb 9 09:46:25.391217 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Feb 9 09:46:25.391275 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Feb 9 09:46:25.399036 kernel: ena 0000:00:05.0: ENA device version: 0.10 Feb 9 09:46:25.399318 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Feb 9 09:46:25.409600 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:c0:e0:e6:1f:15 Feb 9 09:46:25.421496 (udev-worker)[568]: Network interface NamePolicy= disabled on kernel command line. Feb 9 09:46:25.441613 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Feb 9 09:46:25.444122 kernel: nvme nvme0: pci function 0000:00:04.0 Feb 9 09:46:25.452938 kernel: nvme nvme0: 2/0/0 default/read/poll queues Feb 9 09:46:25.458326 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 9 09:46:25.458361 kernel: GPT:9289727 != 16777215 Feb 9 09:46:25.458385 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 9 09:46:25.460542 kernel: GPT:9289727 != 16777215 Feb 9 09:46:25.461863 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 9 09:46:25.465214 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 9 09:46:25.536616 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (566) Feb 9 09:46:25.568979 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 9 09:46:25.630490 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 09:46:25.663388 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 9 09:46:25.671302 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 9 09:46:25.685190 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 9 09:46:25.698546 systemd[1]: Starting disk-uuid.service... Feb 9 09:46:25.720619 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 9 09:46:25.721014 disk-uuid[670]: Primary Header is updated. 
Feb 9 09:46:25.721014 disk-uuid[670]: Secondary Entries is updated.
Feb 9 09:46:25.721014 disk-uuid[670]: Secondary Header is updated.
Feb 9 09:46:26.740360 disk-uuid[671]: The operation has completed successfully.
Feb 9 09:46:26.742625 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 9 09:46:26.900401 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 9 09:46:26.901017 systemd[1]: Finished disk-uuid.service.
Feb 9 09:46:26.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:26.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:26.923626 systemd[1]: Starting verity-setup.service...
Feb 9 09:46:26.958611 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 9 09:46:27.039791 systemd[1]: Found device dev-mapper-usr.device.
Feb 9 09:46:27.044935 systemd[1]: Mounting sysusr-usr.mount...
Feb 9 09:46:27.048461 systemd[1]: Finished verity-setup.service.
Feb 9 09:46:27.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:27.130598 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 9 09:46:27.132112 systemd[1]: Mounted sysusr-usr.mount.
Feb 9 09:46:27.135506 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 9 09:46:27.141900 systemd[1]: Starting ignition-setup.service...
Feb 9 09:46:27.146489 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 9 09:46:27.173754 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 9 09:46:27.173833 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 9 09:46:27.173858 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Feb 9 09:46:27.189341 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 9 09:46:27.203149 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 9 09:46:27.233758 systemd[1]: Finished ignition-setup.service.
Feb 9 09:46:27.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:27.237849 systemd[1]: Starting ignition-fetch-offline.service...
Feb 9 09:46:27.301536 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 9 09:46:27.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:27.317000 audit: BPF prog-id=9 op=LOAD
Feb 9 09:46:27.320103 systemd[1]: Starting systemd-networkd.service...
Feb 9 09:46:27.365029 systemd-networkd[1184]: lo: Link UP
Feb 9 09:46:27.365051 systemd-networkd[1184]: lo: Gained carrier
Feb 9 09:46:27.368960 systemd-networkd[1184]: Enumeration completed
Feb 9 09:46:27.369437 systemd-networkd[1184]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 09:46:27.370039 systemd[1]: Started systemd-networkd.service.
Feb 9 09:46:27.376849 systemd[1]: Reached target network.target.
Feb 9 09:46:27.379868 systemd[1]: Starting iscsiuio.service...
Feb 9 09:46:27.390328 systemd[1]: Started iscsiuio.service.
Feb 9 09:46:27.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:27.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:27.395364 systemd-networkd[1184]: eth0: Link UP
Feb 9 09:46:27.395385 systemd-networkd[1184]: eth0: Gained carrier
Feb 9 09:46:27.399086 systemd[1]: Starting iscsid.service...
Feb 9 09:46:27.408119 iscsid[1189]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 09:46:27.411134 iscsid[1189]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Feb 9 09:46:27.411134 iscsid[1189]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 9 09:46:27.411796 systemd-networkd[1184]: eth0: DHCPv4 address 172.31.30.62/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 9 09:46:27.427679 iscsid[1189]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 9 09:46:27.427679 iscsid[1189]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 09:46:27.427679 iscsid[1189]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 9 09:46:27.436724 systemd[1]: Started iscsid.service.
Feb 9 09:46:27.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:27.440753 systemd[1]: Starting dracut-initqueue.service...
Feb 9 09:46:27.462407 systemd[1]: Finished dracut-initqueue.service.
Feb 9 09:46:27.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:27.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:27.464447 systemd[1]: Reached target remote-fs-pre.target.
Feb 9 09:46:27.466148 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 09:46:27.467910 systemd[1]: Reached target remote-fs.target.
Feb 9 09:46:27.470859 systemd[1]: Starting dracut-pre-mount.service...
Feb 9 09:46:27.487558 systemd[1]: Finished dracut-pre-mount.service.
Feb 9 09:46:27.828765 ignition[1132]: Ignition 2.14.0
Feb 9 09:46:27.829270 ignition[1132]: Stage: fetch-offline
Feb 9 09:46:27.829603 ignition[1132]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 09:46:27.829672 ignition[1132]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 09:46:27.847723 ignition[1132]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 09:46:27.848867 ignition[1132]: Ignition finished successfully
Feb 9 09:46:27.853503 systemd[1]: Finished ignition-fetch-offline.service.
Feb 9 09:46:27.866743 kernel: kauditd_printk_skb: 14 callbacks suppressed
Feb 9 09:46:27.866779 kernel: audit: type=1130 audit(1707471987.854:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:27.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:27.867888 systemd[1]: Starting ignition-fetch.service...
Feb 9 09:46:27.882852 ignition[1208]: Ignition 2.14.0
Feb 9 09:46:27.882880 ignition[1208]: Stage: fetch
Feb 9 09:46:27.883198 ignition[1208]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 09:46:27.883257 ignition[1208]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 09:46:27.898430 ignition[1208]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 09:46:27.901064 ignition[1208]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 09:46:27.909669 ignition[1208]: INFO : PUT result: OK
Feb 9 09:46:27.913382 ignition[1208]: DEBUG : parsed url from cmdline: ""
Feb 9 09:46:27.913382 ignition[1208]: INFO : no config URL provided
Feb 9 09:46:27.913382 ignition[1208]: INFO : reading system config file "/usr/lib/ignition/user.ign"
Feb 9 09:46:27.919030 ignition[1208]: INFO : no config at "/usr/lib/ignition/user.ign"
Feb 9 09:46:27.919030 ignition[1208]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 09:46:27.923842 ignition[1208]: INFO : PUT result: OK
Feb 9 09:46:27.923842 ignition[1208]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 9 09:46:27.927920 ignition[1208]: INFO : GET result: OK
Feb 9 09:46:27.929504 ignition[1208]: DEBUG : parsing config with SHA512: 9b2648bc6e25320a6c54ef4bd2af43db4e87c56094aaacc4757b4e16e1d037a46f04024958737e6ca564c3fba97ad3ed553b2f4f8e7cebc654a9edd3f4c811a3
Feb 9 09:46:27.996362 unknown[1208]: fetched base config from "system"
Feb 9 09:46:27.996652 unknown[1208]: fetched base config from "system"
Feb 9 09:46:27.999650 ignition[1208]: fetch: fetch complete
Feb 9 09:46:28.014366 kernel: audit: type=1130 audit(1707471988.002:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:28.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:27.996668 unknown[1208]: fetched user config from "aws"
Feb 9 09:46:27.999665 ignition[1208]: fetch: fetch passed
Feb 9 09:46:28.002864 systemd[1]: Finished ignition-fetch.service.
Feb 9 09:46:27.999890 ignition[1208]: Ignition finished successfully
Feb 9 09:46:28.019103 systemd[1]: Starting ignition-kargs.service...
Feb 9 09:46:28.039051 ignition[1214]: Ignition 2.14.0
Feb 9 09:46:28.039078 ignition[1214]: Stage: kargs
Feb 9 09:46:28.039366 ignition[1214]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 09:46:28.039420 ignition[1214]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 09:46:28.053764 ignition[1214]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 09:46:28.056017 ignition[1214]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 09:46:28.059168 ignition[1214]: INFO : PUT result: OK
Feb 9 09:46:28.064037 ignition[1214]: kargs: kargs passed
Feb 9 09:46:28.064137 ignition[1214]: Ignition finished successfully
Feb 9 09:46:28.067277 systemd[1]: Finished ignition-kargs.service.
Feb 9 09:46:28.084952 kernel: audit: type=1130 audit(1707471988.067:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:28.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:28.070292 systemd[1]: Starting ignition-disks.service...
Feb 9 09:46:28.092482 ignition[1220]: Ignition 2.14.0
Feb 9 09:46:28.092507 ignition[1220]: Stage: disks
Feb 9 09:46:28.092832 ignition[1220]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 09:46:28.092889 ignition[1220]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 09:46:28.105845 ignition[1220]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 09:46:28.108527 ignition[1220]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 09:46:28.112137 ignition[1220]: INFO : PUT result: OK
Feb 9 09:46:28.116377 ignition[1220]: disks: disks passed
Feb 9 09:46:28.116465 ignition[1220]: Ignition finished successfully
Feb 9 09:46:28.120490 systemd[1]: Finished ignition-disks.service.
Feb 9 09:46:28.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:28.123710 systemd[1]: Reached target initrd-root-device.target.
Feb 9 09:46:28.145746 kernel: audit: type=1130 audit(1707471988.122:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:28.132629 systemd[1]: Reached target local-fs-pre.target.
Feb 9 09:46:28.134221 systemd[1]: Reached target local-fs.target.
Feb 9 09:46:28.135771 systemd[1]: Reached target sysinit.target.
Feb 9 09:46:28.137273 systemd[1]: Reached target basic.target.
Feb 9 09:46:28.140555 systemd[1]: Starting systemd-fsck-root.service...
Feb 9 09:46:28.187228 systemd-fsck[1228]: ROOT: clean, 602/553520 files, 56013/553472 blocks
Feb 9 09:46:28.194532 systemd[1]: Finished systemd-fsck-root.service.
Feb 9 09:46:28.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:28.198915 systemd[1]: Mounting sysroot.mount...
Feb 9 09:46:28.208617 kernel: audit: type=1130 audit(1707471988.196:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:28.223616 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 9 09:46:28.225912 systemd[1]: Mounted sysroot.mount.
Feb 9 09:46:28.226366 systemd[1]: Reached target initrd-root-fs.target.
Feb 9 09:46:28.236380 systemd[1]: Mounting sysroot-usr.mount...
Feb 9 09:46:28.238693 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Feb 9 09:46:28.238773 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 9 09:46:28.238825 systemd[1]: Reached target ignition-diskful.target.
Feb 9 09:46:28.254292 systemd[1]: Mounted sysroot-usr.mount.
Feb 9 09:46:28.268866 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 9 09:46:28.271916 systemd[1]: Starting initrd-setup-root.service...
Feb 9 09:46:28.293942 initrd-setup-root[1250]: cut: /sysroot/etc/passwd: No such file or directory
Feb 9 09:46:28.296641 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1245)
Feb 9 09:46:28.303499 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 9 09:46:28.303562 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 9 09:46:28.303605 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Feb 9 09:46:28.312604 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 9 09:46:28.315989 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 09:46:28.322048 initrd-setup-root[1276]: cut: /sysroot/etc/group: No such file or directory
Feb 9 09:46:28.331035 initrd-setup-root[1284]: cut: /sysroot/etc/shadow: No such file or directory
Feb 9 09:46:28.339952 initrd-setup-root[1292]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 9 09:46:28.525694 systemd[1]: Finished initrd-setup-root.service.
Feb 9 09:46:28.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:28.530107 systemd[1]: Starting ignition-mount.service...
Feb 9 09:46:28.540301 kernel: audit: type=1130 audit(1707471988.527:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:28.540370 systemd[1]: Starting sysroot-boot.service...
Feb 9 09:46:28.549979 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Feb 9 09:46:28.552103 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Feb 9 09:46:28.570856 systemd-networkd[1184]: eth0: Gained IPv6LL
Feb 9 09:46:28.579881 ignition[1311]: INFO : Ignition 2.14.0
Feb 9 09:46:28.579881 ignition[1311]: INFO : Stage: mount
Feb 9 09:46:28.583181 ignition[1311]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 09:46:28.583181 ignition[1311]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 09:46:28.603448 systemd[1]: Finished sysroot-boot.service.
Feb 9 09:46:28.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:28.608700 ignition[1311]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 09:46:28.608700 ignition[1311]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 09:46:28.618309 kernel: audit: type=1130 audit(1707471988.605:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:28.618584 ignition[1311]: INFO : PUT result: OK
Feb 9 09:46:28.624406 ignition[1311]: INFO : mount: mount passed
Feb 9 09:46:28.626010 ignition[1311]: INFO : Ignition finished successfully
Feb 9 09:46:28.628480 systemd[1]: Finished ignition-mount.service.
Feb 9 09:46:28.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:28.632845 systemd[1]: Starting ignition-files.service...
Feb 9 09:46:28.641934 kernel: audit: type=1130 audit(1707471988.630:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:28.649125 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 9 09:46:28.666620 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1320)
Feb 9 09:46:28.670603 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 9 09:46:28.670649 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 9 09:46:28.674796 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Feb 9 09:46:28.681600 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 9 09:46:28.686037 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 09:46:28.704817 ignition[1339]: INFO : Ignition 2.14.0
Feb 9 09:46:28.708421 ignition[1339]: INFO : Stage: files
Feb 9 09:46:28.708421 ignition[1339]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 09:46:28.708421 ignition[1339]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 09:46:28.727292 ignition[1339]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 09:46:28.729747 ignition[1339]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 09:46:28.732865 ignition[1339]: INFO : PUT result: OK
Feb 9 09:46:28.738453 ignition[1339]: DEBUG : files: compiled without relabeling support, skipping
Feb 9 09:46:28.742337 ignition[1339]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 9 09:46:28.742337 ignition[1339]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 9 09:46:28.772052 ignition[1339]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 9 09:46:28.774918 ignition[1339]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 9 09:46:28.778836 unknown[1339]: wrote ssh authorized keys file for user: core
Feb 9 09:46:28.781126 ignition[1339]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 9 09:46:28.784638 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 9 09:46:28.788030 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 9 09:46:28.791339 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 9 09:46:28.791339 ignition[1339]: INFO : GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 9 09:46:28.852132 ignition[1339]: INFO : GET result: OK
Feb 9 09:46:28.967716 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 9 09:46:28.971926 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz"
Feb 9 09:46:28.971926 ignition[1339]: INFO : GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-arm64.tar.gz: attempt #1
Feb 9 09:46:36.538318 ignition[1339]: INFO : GET result: OK
Feb 9 09:46:36.837660 ignition[1339]: DEBUG : file matches expected sum of: 4c7e4541123cbd6f1d6fec1f827395cd58d65716c0998de790f965485738b6d6257c0dc46fd7f66403166c299f6d5bf9ff30b6e1ff9afbb071f17005e834518c
Feb 9 09:46:36.842560 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz"
Feb 9 09:46:36.842560 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz"
Feb 9 09:46:36.842560 ignition[1339]: INFO : GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz: attempt #1
Feb 9 09:46:37.273093 ignition[1339]: INFO : GET result: OK
Feb 9 09:46:37.711951 ignition[1339]: DEBUG : file matches expected sum of: 6b5df61a53601926e4b5a9174828123d555f592165439f541bc117c68781f41c8bd30dccd52367e406d104df849bcbcfb72d9c4bafda4b045c59ce95d0ca0742
Feb 9 09:46:37.716930 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz"
Feb 9 09:46:37.716930 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 9 09:46:37.716930 ignition[1339]: INFO : GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubelet: attempt #1
Feb 9 09:46:37.834720 ignition[1339]: INFO : GET result: OK
Feb 9 09:46:39.273431 ignition[1339]: DEBUG : file matches expected sum of: 0e4ee1f23bf768c49d09beb13a6b5fad6efc8e3e685e7c5610188763e3af55923fb46158b5e76973a0f9a055f9b30d525b467c53415f965536adc2f04d9cf18d
Feb 9 09:46:39.278438 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 9 09:46:39.281705 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/eks/bootstrap.sh"
Feb 9 09:46:39.285166 ignition[1339]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Feb 9 09:46:39.295546 ignition[1339]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1043457117"
Feb 9 09:46:39.302132 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1339)
Feb 9 09:46:39.302167 ignition[1339]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1043457117": device or resource busy
Feb 9 09:46:39.302167 ignition[1339]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1043457117", trying btrfs: device or resource busy
Feb 9 09:46:39.302167 ignition[1339]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1043457117"
Feb 9 09:46:39.315247 ignition[1339]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1043457117"
Feb 9 09:46:39.323221 ignition[1339]: INFO : op(3): [started] unmounting "/mnt/oem1043457117"
Feb 9 09:46:39.326897 ignition[1339]: INFO : op(3): [finished] unmounting "/mnt/oem1043457117"
Feb 9 09:46:39.329121 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/eks/bootstrap.sh"
Feb 9 09:46:39.329121 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 09:46:39.340335 ignition[1339]: INFO : GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubeadm: attempt #1
Feb 9 09:46:39.332851 systemd[1]: mnt-oem1043457117.mount: Deactivated successfully.
Feb 9 09:46:39.387313 ignition[1339]: INFO : GET result: OK
Feb 9 09:46:39.977797 ignition[1339]: DEBUG : file matches expected sum of: 46c9f489062bdb84574703f7339d140d7e42c9c71b367cd860071108a3c1d38fabda2ef69f9c0ff88f7c80e88d38f96ab2248d4c9a6c9c60b0a4c20fd640d0db
Feb 9 09:46:39.982346 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 09:46:39.982346 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/kubectl"
Feb 9 09:46:39.982346 ignition[1339]: INFO : GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubectl: attempt #1
Feb 9 09:46:40.039211 ignition[1339]: INFO : GET result: OK
Feb 9 09:46:40.579757 ignition[1339]: DEBUG : file matches expected sum of: 3672fda0beebbbd636a2088f427463cbad32683ea4fbb1df61650552e63846b6a47db803ccb70c3db0a8f24746a23a5632bdc15a3fb78f4f7d833e7f86763c2a
Feb 9 09:46:40.584685 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/kubectl"
Feb 9 09:46:40.587964 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 09:46:40.591603 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 09:46:40.595085 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/install.sh"
Feb 9 09:46:40.598707 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/install.sh"
Feb 9 09:46:40.602073 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 9 09:46:40.605647 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 9 09:46:40.608991 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 9 09:46:40.612424 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 9 09:46:40.615867 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 9 09:46:40.619354 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 9 09:46:40.627137 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 09:46:40.630704 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 09:46:40.635073 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(11): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb 9 09:46:40.638685 ignition[1339]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Feb 9 09:46:40.649982 ignition[1339]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1724325838"
Feb 9 09:46:40.649982 ignition[1339]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1724325838": device or resource busy
Feb 9 09:46:40.649982 ignition[1339]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1724325838", trying btrfs: device or resource busy
Feb 9 09:46:40.649982 ignition[1339]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1724325838"
Feb 9 09:46:40.668532 ignition[1339]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1724325838"
Feb 9 09:46:40.668532 ignition[1339]: INFO : op(6): [started] unmounting "/mnt/oem1724325838"
Feb 9 09:46:40.668532 ignition[1339]: INFO : op(6): [finished] unmounting "/mnt/oem1724325838"
Feb 9 09:46:40.668532 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(11): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb 9 09:46:40.668532 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(12): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Feb 9 09:46:40.668532 ignition[1339]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Feb 9 09:46:40.692448 ignition[1339]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2148247798"
Feb 9 09:46:40.692448 ignition[1339]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2148247798": device or resource busy
Feb 9 09:46:40.692448 ignition[1339]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2148247798", trying btrfs: device or resource busy
Feb 9 09:46:40.692448 ignition[1339]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2148247798"
Feb 9 09:46:40.705907 ignition[1339]:
INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2148247798" Feb 9 09:46:40.705907 ignition[1339]: INFO : op(9): [started] unmounting "/mnt/oem2148247798" Feb 9 09:46:40.705907 ignition[1339]: INFO : op(9): [finished] unmounting "/mnt/oem2148247798" Feb 9 09:46:40.705907 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(12): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Feb 9 09:46:40.705907 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Feb 9 09:46:40.705907 ignition[1339]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Feb 9 09:46:40.729338 systemd[1]: mnt-oem2148247798.mount: Deactivated successfully. Feb 9 09:46:40.751030 ignition[1339]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2987999379" Feb 9 09:46:40.755439 ignition[1339]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2987999379": device or resource busy Feb 9 09:46:40.755439 ignition[1339]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2987999379", trying btrfs: device or resource busy Feb 9 09:46:40.755439 ignition[1339]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2987999379" Feb 9 09:46:40.766677 ignition[1339]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2987999379" Feb 9 09:46:40.766677 ignition[1339]: INFO : op(c): [started] unmounting "/mnt/oem2987999379" Feb 9 09:46:40.766677 ignition[1339]: INFO : op(c): [finished] unmounting "/mnt/oem2987999379" Feb 9 09:46:40.766677 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Feb 9 09:46:40.766677 ignition[1339]: INFO : files: op(14): [started] processing unit "nvidia.service" Feb 9 09:46:40.766677 ignition[1339]: INFO : files: op(14): [finished] processing 
unit "nvidia.service" Feb 9 09:46:40.766677 ignition[1339]: INFO : files: op(15): [started] processing unit "coreos-metadata-sshkeys@.service" Feb 9 09:46:40.766677 ignition[1339]: INFO : files: op(15): [finished] processing unit "coreos-metadata-sshkeys@.service" Feb 9 09:46:40.766677 ignition[1339]: INFO : files: op(16): [started] processing unit "amazon-ssm-agent.service" Feb 9 09:46:40.766677 ignition[1339]: INFO : files: op(16): op(17): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Feb 9 09:46:40.766677 ignition[1339]: INFO : files: op(16): op(17): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Feb 9 09:46:40.766677 ignition[1339]: INFO : files: op(16): [finished] processing unit "amazon-ssm-agent.service" Feb 9 09:46:40.766677 ignition[1339]: INFO : files: op(18): [started] processing unit "prepare-helm.service" Feb 9 09:46:40.766677 ignition[1339]: INFO : files: op(18): op(19): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 09:46:40.766677 ignition[1339]: INFO : files: op(18): op(19): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 09:46:40.766677 ignition[1339]: INFO : files: op(18): [finished] processing unit "prepare-helm.service" Feb 9 09:46:40.766677 ignition[1339]: INFO : files: op(1a): [started] processing unit "containerd.service" Feb 9 09:46:40.766677 ignition[1339]: INFO : files: op(1a): op(1b): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 9 09:46:40.766677 ignition[1339]: INFO : files: op(1a): op(1b): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 9 09:46:40.766677 ignition[1339]: INFO : files: op(1a): [finished] processing unit 
"containerd.service" Feb 9 09:46:40.825700 ignition[1339]: INFO : files: op(1c): [started] processing unit "prepare-cni-plugins.service" Feb 9 09:46:40.825700 ignition[1339]: INFO : files: op(1c): op(1d): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 09:46:40.825700 ignition[1339]: INFO : files: op(1c): op(1d): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 09:46:40.825700 ignition[1339]: INFO : files: op(1c): [finished] processing unit "prepare-cni-plugins.service" Feb 9 09:46:40.825700 ignition[1339]: INFO : files: op(1e): [started] processing unit "prepare-critools.service" Feb 9 09:46:40.825700 ignition[1339]: INFO : files: op(1e): op(1f): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 09:46:40.825700 ignition[1339]: INFO : files: op(1e): op(1f): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 09:46:40.825700 ignition[1339]: INFO : files: op(1e): [finished] processing unit "prepare-critools.service" Feb 9 09:46:40.825700 ignition[1339]: INFO : files: op(20): [started] setting preset to enabled for "amazon-ssm-agent.service" Feb 9 09:46:40.825700 ignition[1339]: INFO : files: op(20): [finished] setting preset to enabled for "amazon-ssm-agent.service" Feb 9 09:46:40.825700 ignition[1339]: INFO : files: op(21): [started] setting preset to enabled for "prepare-helm.service" Feb 9 09:46:40.825700 ignition[1339]: INFO : files: op(21): [finished] setting preset to enabled for "prepare-helm.service" Feb 9 09:46:40.825700 ignition[1339]: INFO : files: op(22): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 09:46:40.825700 ignition[1339]: INFO : files: op(22): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 09:46:40.825700 ignition[1339]: 
INFO : files: op(23): [started] setting preset to enabled for "prepare-critools.service" Feb 9 09:46:40.825700 ignition[1339]: INFO : files: op(23): [finished] setting preset to enabled for "prepare-critools.service" Feb 9 09:46:40.825700 ignition[1339]: INFO : files: op(24): [started] setting preset to enabled for "nvidia.service" Feb 9 09:46:40.825700 ignition[1339]: INFO : files: op(24): [finished] setting preset to enabled for "nvidia.service" Feb 9 09:46:40.825700 ignition[1339]: INFO : files: op(25): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 9 09:46:40.825700 ignition[1339]: INFO : files: op(25): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 9 09:46:40.890650 ignition[1339]: INFO : files: createResultFile: createFiles: op(26): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 09:46:40.890650 ignition[1339]: INFO : files: createResultFile: createFiles: op(26): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 09:46:40.890650 ignition[1339]: INFO : files: files passed Feb 9 09:46:40.890650 ignition[1339]: INFO : Ignition finished successfully Feb 9 09:46:40.914294 kernel: audit: type=1130 audit(1707472000.898:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:40.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:40.897886 systemd[1]: Finished ignition-files.service. Feb 9 09:46:40.910426 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 9 09:46:40.922693 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). 
Feb 9 09:46:40.924308 systemd[1]: Starting ignition-quench.service... Feb 9 09:46:40.932067 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 09:46:40.934632 systemd[1]: Finished ignition-quench.service. Feb 9 09:46:40.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:40.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:40.955241 kernel: audit: type=1130 audit(1707472000.937:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:40.955295 kernel: audit: type=1131 audit(1707472000.937:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:40.956878 initrd-setup-root-after-ignition[1364]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 09:46:40.961688 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 09:46:40.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:40.965664 systemd[1]: Reached target ignition-complete.target. Feb 9 09:46:40.976710 kernel: audit: type=1130 audit(1707472000.963:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:40.978147 systemd[1]: Starting initrd-parse-etc.service... 
Feb 9 09:46:41.005297 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 09:46:41.007490 systemd[1]: Finished initrd-parse-etc.service. Feb 9 09:46:41.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:41.011174 systemd[1]: Reached target initrd-fs.target. Feb 9 09:46:41.029154 kernel: audit: type=1130 audit(1707472001.009:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:41.029201 kernel: audit: type=1131 audit(1707472001.009:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:41.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:41.029146 systemd[1]: Reached target initrd.target. Feb 9 09:46:41.032249 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 09:46:41.036416 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 09:46:41.059640 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 09:46:41.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:41.064205 systemd[1]: Starting initrd-cleanup.service... Feb 9 09:46:41.074172 kernel: audit: type=1130 audit(1707472001.061:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:46:41.085459 systemd[1]: Stopped target nss-lookup.target. Feb 9 09:46:41.089797 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 09:46:41.093487 systemd[1]: Stopped target timers.target. Feb 9 09:46:41.096510 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 09:46:41.098642 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 09:46:41.100000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:41.101912 systemd[1]: Stopped target initrd.target. Feb 9 09:46:41.125241 kernel: audit: type=1131 audit(1707472001.100:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:41.142057 kernel: audit: type=1131 audit(1707472001.129:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:41.129000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:41.110788 systemd[1]: Stopped target basic.target. Feb 9 09:46:41.112471 systemd[1]: Stopped target ignition-complete.target. Feb 9 09:46:41.144000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:41.156227 kernel: audit: type=1131 audit(1707472001.144:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:46:41.114400 systemd[1]: Stopped target ignition-diskful.target. Feb 9 09:46:41.155000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:41.116283 systemd[1]: Stopped target initrd-root-device.target. Feb 9 09:46:41.118220 systemd[1]: Stopped target remote-fs.target. Feb 9 09:46:41.120006 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 09:46:41.121949 systemd[1]: Stopped target sysinit.target. Feb 9 09:46:41.163000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:41.123840 systemd[1]: Stopped target local-fs.target. Feb 9 09:46:41.127118 systemd[1]: Stopped target local-fs-pre.target. Feb 9 09:46:41.129010 systemd[1]: Stopped target swap.target. Feb 9 09:46:41.130647 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 09:46:41.130959 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 09:46:41.140509 systemd[1]: Stopped target cryptsetup.target. Feb 9 09:46:41.142199 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 09:46:41.143496 systemd[1]: Stopped dracut-initqueue.service. Feb 9 09:46:41.145999 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 09:46:41.186000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:41.189000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:46:41.146206 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 09:46:41.156758 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 09:46:41.157066 systemd[1]: Stopped ignition-files.service. Feb 9 09:46:41.179340 systemd[1]: Stopping ignition-mount.service... Feb 9 09:46:41.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:41.181098 systemd[1]: Stopping iscsiuio.service... Feb 9 09:46:41.184084 systemd[1]: Stopping sysroot-boot.service... Feb 9 09:46:41.185701 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 09:46:41.186021 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 09:46:41.188046 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 09:46:41.188290 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 09:46:41.200894 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 9 09:46:41.201931 systemd[1]: Stopped iscsiuio.service. Feb 9 09:46:41.225030 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 09:46:41.226956 systemd[1]: Finished initrd-cleanup.service. Feb 9 09:46:41.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:41.228000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:46:41.232893 ignition[1377]: INFO : Ignition 2.14.0 Feb 9 09:46:41.232893 ignition[1377]: INFO : Stage: umount Feb 9 09:46:41.236619 ignition[1377]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 09:46:41.236619 ignition[1377]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 9 09:46:41.245604 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 9 09:46:41.258790 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 09:46:41.260672 systemd[1]: Stopped sysroot-boot.service. Feb 9 09:46:41.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:41.264074 ignition[1377]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 9 09:46:41.266795 ignition[1377]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 9 09:46:41.269920 ignition[1377]: INFO : PUT result: OK Feb 9 09:46:41.275538 ignition[1377]: INFO : umount: umount passed Feb 9 09:46:41.277751 ignition[1377]: INFO : Ignition finished successfully Feb 9 09:46:41.278918 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 09:46:41.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:41.279460 systemd[1]: Stopped ignition-mount.service. Feb 9 09:46:41.283000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:41.282008 systemd[1]: ignition-disks.service: Deactivated successfully. 
Feb 9 09:46:41.282098 systemd[1]: Stopped ignition-disks.service. Feb 9 09:46:41.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:41.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:41.292000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:41.284843 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 09:46:41.285012 systemd[1]: Stopped ignition-kargs.service. Feb 9 09:46:41.287820 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 9 09:46:41.287901 systemd[1]: Stopped ignition-fetch.service. Feb 9 09:46:41.312000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:41.290254 systemd[1]: Stopped target network.target. Feb 9 09:46:41.315000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:41.293992 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 09:46:41.294084 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 09:46:41.294478 systemd[1]: Stopped target paths.target. Feb 9 09:46:41.295423 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Feb 9 09:46:41.331000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:41.301631 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 09:46:41.304580 systemd[1]: Stopped target slices.target. Feb 9 09:46:41.306081 systemd[1]: Stopped target sockets.target. Feb 9 09:46:41.307704 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 09:46:41.307759 systemd[1]: Closed iscsid.socket. Feb 9 09:46:41.310892 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 09:46:41.310979 systemd[1]: Closed iscsiuio.socket. Feb 9 09:46:41.312407 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 09:46:41.312492 systemd[1]: Stopped ignition-setup.service. Feb 9 09:46:41.314147 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 09:46:41.314225 systemd[1]: Stopped initrd-setup-root.service. Feb 9 09:46:41.317683 systemd[1]: Stopping systemd-networkd.service... Feb 9 09:46:41.319357 systemd[1]: Stopping systemd-resolved.service... Feb 9 09:46:41.323634 systemd-networkd[1184]: eth0: DHCPv6 lease lost Feb 9 09:46:41.354000 audit: BPF prog-id=9 op=UNLOAD Feb 9 09:46:41.325374 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 09:46:41.325656 systemd[1]: Stopped systemd-networkd.service. Feb 9 09:46:41.355884 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 09:46:41.362832 systemd[1]: Stopped systemd-resolved.service. Feb 9 09:46:41.364000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:41.366443 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 09:46:41.368000 audit: BPF prog-id=6 op=UNLOAD Feb 9 09:46:41.366545 systemd[1]: Closed systemd-networkd.socket. 
Feb 9 09:46:41.372554 systemd[1]: Stopping network-cleanup.service... Feb 9 09:46:41.376000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:41.378000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:41.376089 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 09:46:41.376216 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 09:46:41.378140 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 09:46:41.378225 systemd[1]: Stopped systemd-sysctl.service. Feb 9 09:46:41.388710 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 09:46:41.388816 systemd[1]: Stopped systemd-modules-load.service. Feb 9 09:46:41.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:41.394346 systemd[1]: Stopping systemd-udevd.service... Feb 9 09:46:41.399690 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 09:46:41.400195 systemd[1]: Stopped systemd-udevd.service. Feb 9 09:46:41.403000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:41.405433 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 09:46:41.405528 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 09:46:41.412467 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 09:46:41.412640 systemd[1]: Closed systemd-udevd-kernel.socket. 
Feb 9 09:46:41.417724 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 09:46:41.417827 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 09:46:41.421000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:41.422860 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 09:46:41.422956 systemd[1]: Stopped dracut-cmdline.service. Feb 9 09:46:41.426000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:41.427873 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 09:46:41.427959 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 09:46:41.429000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:41.434313 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 09:46:41.452000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:41.450677 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 9 09:46:41.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:41.458000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:46:41.461000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:41.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:41.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:41.450817 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 9 09:46:41.454648 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 09:46:41.454754 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 09:46:41.458765 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 09:46:41.458855 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 09:46:41.489000 audit: BPF prog-id=5 op=UNLOAD Feb 9 09:46:41.489000 audit: BPF prog-id=4 op=UNLOAD Feb 9 09:46:41.489000 audit: BPF prog-id=3 op=UNLOAD Feb 9 09:46:41.461230 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 09:46:41.461417 systemd[1]: Stopped network-cleanup.service. Feb 9 09:46:41.493000 audit: BPF prog-id=8 op=UNLOAD Feb 9 09:46:41.493000 audit: BPF prog-id=7 op=UNLOAD Feb 9 09:46:41.463449 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 09:46:41.463637 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 09:46:41.465734 systemd[1]: Reached target initrd-switch-root.target. Feb 9 09:46:41.468762 systemd[1]: Starting initrd-switch-root.service... Feb 9 09:46:41.486465 systemd[1]: Switching root. Feb 9 09:46:41.521751 iscsid[1189]: iscsid shutting down. 
Feb 9 09:46:41.523374 systemd-journald[309]: Received SIGTERM from PID 1 (n/a). Feb 9 09:46:41.523458 systemd-journald[309]: Journal stopped Feb 9 09:46:46.409892 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 09:46:46.410000 kernel: SELinux: Class anon_inode not defined in policy. Feb 9 09:46:46.410034 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 09:46:46.410067 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 09:46:46.410103 kernel: SELinux: policy capability open_perms=1 Feb 9 09:46:46.410139 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 09:46:46.410170 kernel: SELinux: policy capability always_check_network=0 Feb 9 09:46:46.410201 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 09:46:46.410232 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 09:46:46.410261 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 09:46:46.410291 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 09:46:46.410322 systemd[1]: Successfully loaded SELinux policy in 86.051ms. Feb 9 09:46:46.410380 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 19.408ms. Feb 9 09:46:46.410417 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 09:46:46.410453 systemd[1]: Detected virtualization amazon. Feb 9 09:46:46.410483 systemd[1]: Detected architecture arm64. Feb 9 09:46:46.410515 systemd[1]: Detected first boot. Feb 9 09:46:46.410548 systemd[1]: Initializing machine ID from VM UUID. Feb 9 09:46:46.410599 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
Feb 9 09:46:46.410632 systemd[1]: Populated /etc with preset unit settings. Feb 9 09:46:46.410667 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:46:46.410709 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:46:46.410745 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:46:46.410785 systemd[1]: Queued start job for default target multi-user.target. Feb 9 09:46:46.410818 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 09:46:46.410850 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 09:46:46.410882 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Feb 9 09:46:46.410912 systemd[1]: Created slice system-getty.slice. Feb 9 09:46:46.410960 systemd[1]: Created slice system-modprobe.slice. Feb 9 09:46:46.410999 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 09:46:46.411032 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 09:46:46.411063 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 09:46:46.411095 systemd[1]: Created slice user.slice. Feb 9 09:46:46.411126 systemd[1]: Started systemd-ask-password-console.path. Feb 9 09:46:46.411157 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 09:46:46.411191 systemd[1]: Set up automount boot.automount. Feb 9 09:46:46.411220 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 09:46:46.411249 systemd[1]: Reached target integritysetup.target. Feb 9 09:46:46.411284 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 09:46:46.411317 systemd[1]: Reached target remote-fs.target. 
Feb 9 09:46:46.411347 systemd[1]: Reached target slices.target. Feb 9 09:46:46.411378 systemd[1]: Reached target swap.target. Feb 9 09:46:46.411407 systemd[1]: Reached target torcx.target. Feb 9 09:46:46.411440 systemd[1]: Reached target veritysetup.target. Feb 9 09:46:46.411478 systemd[1]: Listening on systemd-coredump.socket. Feb 9 09:46:46.411509 systemd[1]: Listening on systemd-initctl.socket. Feb 9 09:46:46.411544 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 09:46:46.411590 kernel: kauditd_printk_skb: 48 callbacks suppressed Feb 9 09:46:46.411628 kernel: audit: type=1400 audit(1707472006.078:84): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 09:46:46.411659 kernel: audit: type=1335 audit(1707472006.078:85): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 9 09:46:46.411688 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 09:46:46.411717 systemd[1]: Listening on systemd-journald.socket. Feb 9 09:46:46.411749 systemd[1]: Listening on systemd-networkd.socket. Feb 9 09:46:46.411779 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 09:46:46.411810 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 09:46:46.411844 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 09:46:46.411877 systemd[1]: Mounting dev-hugepages.mount... Feb 9 09:46:46.411906 systemd[1]: Mounting dev-mqueue.mount... Feb 9 09:46:46.411935 systemd[1]: Mounting media.mount... Feb 9 09:46:46.411966 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 09:46:46.411996 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 09:46:46.412025 systemd[1]: Mounting tmp.mount... Feb 9 09:46:46.412056 systemd[1]: Starting flatcar-tmpfiles.service... 
Feb 9 09:46:46.412085 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 09:46:46.412120 systemd[1]: Starting kmod-static-nodes.service... Feb 9 09:46:46.412149 systemd[1]: Starting modprobe@configfs.service... Feb 9 09:46:46.412178 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 09:46:46.412207 systemd[1]: Starting modprobe@drm.service... Feb 9 09:46:46.412236 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 09:46:46.412265 systemd[1]: Starting modprobe@fuse.service... Feb 9 09:46:46.412296 systemd[1]: Starting modprobe@loop.service... Feb 9 09:46:46.412326 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 09:46:46.412356 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 9 09:46:46.412391 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Feb 9 09:46:46.412422 kernel: fuse: init (API version 7.34) Feb 9 09:46:46.412453 systemd[1]: Starting systemd-journald.service... Feb 9 09:46:46.412482 systemd[1]: Starting systemd-modules-load.service... Feb 9 09:46:46.412516 systemd[1]: Starting systemd-network-generator.service... Feb 9 09:46:46.412546 systemd[1]: Starting systemd-remount-fs.service... Feb 9 09:46:46.413895 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 09:46:46.413948 systemd[1]: Mounted dev-hugepages.mount. Feb 9 09:46:46.413980 systemd[1]: Mounted dev-mqueue.mount. Feb 9 09:46:46.414016 systemd[1]: Mounted media.mount. Feb 9 09:46:46.414047 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 09:46:46.414076 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 09:46:46.414106 systemd[1]: Mounted tmp.mount. Feb 9 09:46:46.414134 kernel: loop: module loaded Feb 9 09:46:46.414172 systemd[1]: Finished kmod-static-nodes.service. 
Feb 9 09:46:46.414202 kernel: audit: type=1130 audit(1707472006.375:86): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:46.414232 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 09:46:46.414261 systemd[1]: Finished modprobe@configfs.service. Feb 9 09:46:46.414294 kernel: audit: type=1130 audit(1707472006.396:87): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:46.414325 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 09:46:46.414354 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 09:46:46.414386 kernel: audit: type=1131 audit(1707472006.405:88): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:46.414420 systemd-journald[1528]: Journal started Feb 9 09:46:46.414523 systemd-journald[1528]: Runtime Journal (/run/log/journal/ec22fad25a7caef9442eed03e8a6b8a0) is 8.0M, max 75.4M, 67.4M free. Feb 9 09:46:46.078000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 09:46:46.078000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 9 09:46:46.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:46:46.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:46.405000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:46.446698 kernel: audit: type=1305 audit(1707472006.405:89): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 09:46:46.446784 systemd[1]: Started systemd-journald.service. Feb 9 09:46:46.446830 kernel: audit: type=1300 audit(1707472006.405:89): arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffe3c4f640 a2=4000 a3=1 items=0 ppid=1 pid=1528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:46:46.446870 kernel: audit: type=1327 audit(1707472006.405:89): proctitle="/usr/lib/systemd/systemd-journald" Feb 9 09:46:46.405000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 09:46:46.405000 audit[1528]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffe3c4f640 a2=4000 a3=1 items=0 ppid=1 pid=1528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:46:46.405000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 09:46:46.452093 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 09:46:46.452470 systemd[1]: Finished modprobe@drm.service. 
Feb 9 09:46:46.456392 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 09:46:46.456952 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 09:46:46.459388 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 09:46:46.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:46.459945 systemd[1]: Finished modprobe@fuse.service. Feb 9 09:46:46.477459 kernel: audit: type=1130 audit(1707472006.429:90): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:46.477518 kernel: audit: type=1131 audit(1707472006.429:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:46.429000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:46.478087 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 09:46:46.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:46.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:46:46.454000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:46.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:46.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:46.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:46.476000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:46.485752 systemd[1]: Finished modprobe@loop.service. Feb 9 09:46:46.485000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:46.485000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:46.488716 systemd[1]: Finished systemd-modules-load.service. 
Feb 9 09:46:46.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:46.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:46.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:46.491792 systemd[1]: Finished systemd-network-generator.service. Feb 9 09:46:46.495113 systemd[1]: Finished systemd-remount-fs.service. Feb 9 09:46:46.498013 systemd[1]: Reached target network-pre.target. Feb 9 09:46:46.502255 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 09:46:46.507806 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 09:46:46.509432 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 09:46:46.516172 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 09:46:46.520397 systemd[1]: Starting systemd-journal-flush.service... Feb 9 09:46:46.522711 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 09:46:46.526541 systemd[1]: Starting systemd-random-seed.service... Feb 9 09:46:46.538931 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 09:46:46.541551 systemd[1]: Starting systemd-sysctl.service... Feb 9 09:46:46.551827 systemd-journald[1528]: Time spent on flushing to /var/log/journal/ec22fad25a7caef9442eed03e8a6b8a0 is 68.927ms for 1101 entries. 
Feb 9 09:46:46.551827 systemd-journald[1528]: System Journal (/var/log/journal/ec22fad25a7caef9442eed03e8a6b8a0) is 8.0M, max 195.6M, 187.6M free. Feb 9 09:46:46.721328 systemd-journald[1528]: Received client request to flush runtime journal. Feb 9 09:46:46.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:46.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:46.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:46.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:46.548300 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 09:46:46.555707 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 09:46:46.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:46.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:46.586751 systemd[1]: Finished systemd-random-seed.service. Feb 9 09:46:46.588892 systemd[1]: Reached target first-boot-complete.target. 
Feb 9 09:46:46.617967 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 09:46:46.622501 systemd[1]: Starting systemd-sysusers.service... Feb 9 09:46:46.640263 systemd[1]: Finished systemd-sysctl.service. Feb 9 09:46:46.701869 systemd[1]: Finished systemd-sysusers.service. Feb 9 09:46:46.706263 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 09:46:46.723050 systemd[1]: Finished systemd-journal-flush.service. Feb 9 09:46:46.725930 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 09:46:46.731261 systemd[1]: Starting systemd-udev-settle.service... Feb 9 09:46:46.759193 udevadm[1586]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 9 09:46:46.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:46.791033 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 09:46:47.490790 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 09:46:47.491000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:47.495248 systemd[1]: Starting systemd-udevd.service... Feb 9 09:46:47.536695 systemd-udevd[1589]: Using default interface naming scheme 'v252'. Feb 9 09:46:47.578144 systemd[1]: Started systemd-udevd.service. Feb 9 09:46:47.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:47.583054 systemd[1]: Starting systemd-networkd.service... Feb 9 09:46:47.591593 systemd[1]: Starting systemd-userdbd.service... 
Feb 9 09:46:47.668472 systemd[1]: Found device dev-ttyS0.device. Feb 9 09:46:47.692312 (udev-worker)[1599]: Network interface NamePolicy= disabled on kernel command line. Feb 9 09:46:47.708412 systemd[1]: Started systemd-userdbd.service. Feb 9 09:46:47.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:47.867073 systemd-networkd[1595]: lo: Link UP Feb 9 09:46:47.867093 systemd-networkd[1595]: lo: Gained carrier Feb 9 09:46:47.868106 systemd-networkd[1595]: Enumeration completed Feb 9 09:46:47.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:47.868325 systemd[1]: Started systemd-networkd.service. Feb 9 09:46:47.868326 systemd-networkd[1595]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 09:46:47.873186 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 09:46:47.884635 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 09:46:47.885524 systemd-networkd[1595]: eth0: Link UP Feb 9 09:46:47.885855 systemd-networkd[1595]: eth0: Gained carrier Feb 9 09:46:47.899868 systemd-networkd[1595]: eth0: DHCPv4 address 172.31.30.62/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 9 09:46:47.920667 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1608) Feb 9 09:46:48.068374 systemd[1]: dev-disk-by\x2dlabel-OEM.device was skipped because of an unmet condition check (ConditionPathExists=!/usr/.noupdate). Feb 9 09:46:48.069374 systemd[1]: Finished systemd-udev-settle.service. 
Feb 9 09:46:48.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:48.083649 systemd[1]: Starting lvm2-activation-early.service... Feb 9 09:46:48.113514 lvm[1708]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 09:46:48.151364 systemd[1]: Finished lvm2-activation-early.service. Feb 9 09:46:48.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:48.153868 systemd[1]: Reached target cryptsetup.target. Feb 9 09:46:48.158276 systemd[1]: Starting lvm2-activation.service... Feb 9 09:46:48.167926 lvm[1710]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 09:46:48.205374 systemd[1]: Finished lvm2-activation.service. Feb 9 09:46:48.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:48.207365 systemd[1]: Reached target local-fs-pre.target. Feb 9 09:46:48.209121 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 09:46:48.209175 systemd[1]: Reached target local-fs.target. Feb 9 09:46:48.211085 systemd[1]: Reached target machines.target. Feb 9 09:46:48.215375 systemd[1]: Starting ldconfig.service... Feb 9 09:46:48.218173 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Feb 9 09:46:48.218329 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 09:46:48.221028 systemd[1]: Starting systemd-boot-update.service... Feb 9 09:46:48.230327 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 09:46:48.236055 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 09:46:48.239050 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 09:46:48.239186 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 09:46:48.242282 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 09:46:48.260778 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1713 (bootctl) Feb 9 09:46:48.263175 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 09:46:48.279842 systemd-tmpfiles[1716]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 09:46:48.283556 systemd-tmpfiles[1716]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 09:46:48.286836 systemd-tmpfiles[1716]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 09:46:48.301291 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 09:46:48.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:48.391231 systemd-fsck[1722]: fsck.fat 4.2 (2021-01-31) Feb 9 09:46:48.391231 systemd-fsck[1722]: /dev/nvme0n1p1: 236 files, 113719/258078 clusters Feb 9 09:46:48.394530 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. 
Feb 9 09:46:48.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:48.399556 systemd[1]: Mounting boot.mount... Feb 9 09:46:48.439055 systemd[1]: Mounted boot.mount. Feb 9 09:46:48.467786 systemd[1]: Finished systemd-boot-update.service. Feb 9 09:46:48.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:48.695045 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 09:46:48.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:48.699934 systemd[1]: Starting audit-rules.service... Feb 9 09:46:48.705222 systemd[1]: Starting clean-ca-certificates.service... Feb 9 09:46:48.715589 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 09:46:48.727120 systemd[1]: Starting systemd-resolved.service... Feb 9 09:46:48.739182 systemd[1]: Starting systemd-timesyncd.service... Feb 9 09:46:48.749365 systemd[1]: Starting systemd-update-utmp.service... Feb 9 09:46:48.757136 systemd[1]: Finished clean-ca-certificates.service. Feb 9 09:46:48.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:48.762009 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Feb 9 09:46:48.802000 audit[1752]: SYSTEM_BOOT pid=1752 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 09:46:48.810128 systemd[1]: Finished systemd-update-utmp.service. Feb 9 09:46:48.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:48.817022 ldconfig[1712]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 09:46:48.839365 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 09:46:48.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:48.851848 systemd[1]: Finished ldconfig.service. Feb 9 09:46:48.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:48.856354 systemd[1]: Starting systemd-update-done.service... 
Feb 9 09:46:48.874000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 09:46:48.874000 audit[1762]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd2b496a0 a2=420 a3=0 items=0 ppid=1740 pid=1762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:46:48.874000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 09:46:48.877128 augenrules[1762]: No rules Feb 9 09:46:48.878849 systemd[1]: Finished audit-rules.service. Feb 9 09:46:48.891515 systemd[1]: Finished systemd-update-done.service. Feb 9 09:46:48.965694 systemd-resolved[1745]: Positive Trust Anchors: Feb 9 09:46:48.965756 systemd-resolved[1745]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 09:46:48.965808 systemd-resolved[1745]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 09:46:48.979834 systemd[1]: Started systemd-timesyncd.service. Feb 9 09:46:48.981914 systemd[1]: Reached target time-set.target. Feb 9 09:46:49.015030 systemd-resolved[1745]: Defaulting to hostname 'linux'. Feb 9 09:46:49.018066 systemd[1]: Started systemd-resolved.service. Feb 9 09:46:49.019954 systemd[1]: Reached target network.target. Feb 9 09:46:49.021687 systemd[1]: Reached target nss-lookup.target. Feb 9 09:46:49.023410 systemd[1]: Reached target sysinit.target. 
Feb 9 09:46:49.025192 systemd[1]: Started motdgen.path.
Feb 9 09:46:49.026662 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Feb 9 09:46:49.029164 systemd[1]: Started logrotate.timer.
Feb 9 09:46:49.030854 systemd[1]: Started mdadm.timer.
Feb 9 09:46:49.032249 systemd[1]: Started systemd-tmpfiles-clean.timer.
Feb 9 09:46:49.034046 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 9 09:46:49.034090 systemd[1]: Reached target paths.target.
Feb 9 09:46:49.035642 systemd[1]: Reached target timers.target.
Feb 9 09:46:49.037626 systemd[1]: Listening on dbus.socket.
Feb 9 09:46:49.041297 systemd[1]: Starting docker.socket...
Feb 9 09:46:49.045287 systemd[1]: Listening on sshd.socket.
Feb 9 09:46:49.047363 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 09:46:49.048068 systemd[1]: Listening on docker.socket.
Feb 9 09:46:49.049791 systemd[1]: Reached target sockets.target.
Feb 9 09:46:49.051760 systemd[1]: Reached target basic.target.
Feb 9 09:46:49.054149 systemd[1]: System is tainted: cgroupsv1
Feb 9 09:46:49.054228 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 9 09:46:49.054278 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 9 09:46:49.056300 systemd-timesyncd[1748]: Contacted time server 135.148.100.14:123 (0.flatcar.pool.ntp.org).
Feb 9 09:46:49.056399 systemd-timesyncd[1748]: Initial clock synchronization to Fri 2024-02-09 09:46:49.367799 UTC.
Feb 9 09:46:49.058178 systemd[1]: Starting containerd.service...
Feb 9 09:46:49.062091 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Feb 9 09:46:49.067884 systemd[1]: Starting dbus.service...
Feb 9 09:46:49.072384 systemd[1]: Starting enable-oem-cloudinit.service...
Feb 9 09:46:49.081123 systemd[1]: Starting extend-filesystems.service...
Feb 9 09:46:49.082962 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Feb 9 09:46:49.085742 systemd[1]: Starting motdgen.service...
Feb 9 09:46:49.093176 systemd[1]: Starting prepare-cni-plugins.service...
Feb 9 09:46:49.099248 systemd[1]: Starting prepare-critools.service...
Feb 9 09:46:49.104304 systemd[1]: Starting prepare-helm.service...
Feb 9 09:46:49.112855 systemd[1]: Starting ssh-key-proc-cmdline.service...
Feb 9 09:46:49.125368 jq[1779]: false
Feb 9 09:46:49.127587 systemd[1]: Starting sshd-keygen.service...
Feb 9 09:46:49.133947 systemd[1]: Starting systemd-logind.service...
Feb 9 09:46:49.135811 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 09:46:49.136020 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 9 09:46:49.139558 systemd[1]: Starting update-engine.service...
Feb 9 09:46:49.202216 jq[1795]: true
Feb 9 09:46:49.145080 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Feb 9 09:46:49.154052 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 9 09:46:49.154628 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Feb 9 09:46:49.200167 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 9 09:46:49.221413 tar[1798]: crictl
Feb 9 09:46:49.221906 tar[1797]: ./
Feb 9 09:46:49.221906 tar[1797]: ./macvlan
Feb 9 09:46:49.200792 systemd[1]: Finished ssh-key-proc-cmdline.service.
Feb 9 09:46:49.238602 tar[1799]: linux-arm64/helm
Feb 9 09:46:49.255125 jq[1813]: true
Feb 9 09:46:49.292308 extend-filesystems[1780]: Found nvme0n1
Feb 9 09:46:49.292308 extend-filesystems[1780]: Found nvme0n1p1
Feb 9 09:46:49.292308 extend-filesystems[1780]: Found nvme0n1p2
Feb 9 09:46:49.292308 extend-filesystems[1780]: Found nvme0n1p3
Feb 9 09:46:49.292308 extend-filesystems[1780]: Found usr
Feb 9 09:46:49.292308 extend-filesystems[1780]: Found nvme0n1p4
Feb 9 09:46:49.292308 extend-filesystems[1780]: Found nvme0n1p6
Feb 9 09:46:49.292308 extend-filesystems[1780]: Found nvme0n1p7
Feb 9 09:46:49.292308 extend-filesystems[1780]: Found nvme0n1p9
Feb 9 09:46:49.292308 extend-filesystems[1780]: Checking size of /dev/nvme0n1p9
Feb 9 09:46:49.342814 systemd[1]: motdgen.service: Deactivated successfully.
Feb 9 09:46:49.343378 systemd[1]: Finished motdgen.service.
Feb 9 09:46:49.402624 env[1801]: time="2024-02-09T09:46:49.401400332Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Feb 9 09:46:49.439483 extend-filesystems[1780]: Resized partition /dev/nvme0n1p9
Feb 9 09:46:49.469678 extend-filesystems[1847]: resize2fs 1.46.5 (30-Dec-2021)
Feb 9 09:46:49.478413 dbus-daemon[1778]: [system] SELinux support is enabled
Feb 9 09:46:49.478732 systemd[1]: Started dbus.service.
Feb 9 09:46:49.484055 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 9 09:46:49.484122 systemd[1]: Reached target system-config.target.
Feb 9 09:46:49.486873 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 9 09:46:49.486925 systemd[1]: Reached target user-config.target.
Feb 9 09:46:49.509091 dbus-daemon[1778]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1595 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Feb 9 09:46:49.516108 systemd[1]: Starting systemd-hostnamed.service...
Feb 9 09:46:49.522547 tar[1797]: ./static
Feb 9 09:46:49.524284 bash[1842]: Updated "/home/core/.ssh/authorized_keys"
Feb 9 09:46:49.524924 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Feb 9 09:46:49.537597 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Feb 9 09:46:49.563826 systemd-networkd[1595]: eth0: Gained IPv6LL
Feb 9 09:46:49.568231 systemd[1]: Finished systemd-networkd-wait-online.service.
Feb 9 09:46:49.570558 systemd[1]: Reached target network-online.target.
Feb 9 09:46:49.575432 systemd[1]: Started amazon-ssm-agent.service.
Feb 9 09:46:49.582033 systemd[1]: Started nvidia.service.
Feb 9 09:46:49.614655 env[1801]: time="2024-02-09T09:46:49.613828617Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 9 09:46:49.616256 env[1801]: time="2024-02-09T09:46:49.615837789Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 9 09:46:49.626423 env[1801]: time="2024-02-09T09:46:49.624837213Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 9 09:46:49.626423 env[1801]: time="2024-02-09T09:46:49.624909573Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 9 09:46:49.662865 update_engine[1794]: I0209 09:46:49.661688 1794 main.cc:92] Flatcar Update Engine starting
Feb 9 09:46:49.664236 env[1801]: time="2024-02-09T09:46:49.663927021Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 9 09:46:49.664236 env[1801]: time="2024-02-09T09:46:49.663993237Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 9 09:46:49.664236 env[1801]: time="2024-02-09T09:46:49.664029669Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Feb 9 09:46:49.664236 env[1801]: time="2024-02-09T09:46:49.664057821Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 9 09:46:49.664589 env[1801]: time="2024-02-09T09:46:49.664272381Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 9 09:46:49.664829 env[1801]: time="2024-02-09T09:46:49.664774293Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 9 09:46:49.669279 env[1801]: time="2024-02-09T09:46:49.669170409Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 9 09:46:49.669279 env[1801]: time="2024-02-09T09:46:49.669265869Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 9 09:46:49.669769 env[1801]: time="2024-02-09T09:46:49.669492393Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Feb 9 09:46:49.669769 env[1801]: time="2024-02-09T09:46:49.669524505Z" level=info msg="metadata content store policy set" policy=shared
Feb 9 09:46:49.672164 systemd[1]: Started update-engine.service.
Feb 9 09:46:49.673275 update_engine[1794]: I0209 09:46:49.672241 1794 update_check_scheduler.cc:74] Next update check in 10m58s
Feb 9 09:46:49.677173 systemd[1]: Started locksmithd.service.
Feb 9 09:46:49.821455 env[1801]: time="2024-02-09T09:46:49.821302894Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 9 09:46:49.821455 env[1801]: time="2024-02-09T09:46:49.821398174Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 9 09:46:49.821770 env[1801]: time="2024-02-09T09:46:49.821467798Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 9 09:46:49.821770 env[1801]: time="2024-02-09T09:46:49.821558458Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 9 09:46:49.821900 env[1801]: time="2024-02-09T09:46:49.821746918Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 9 09:46:49.821900 env[1801]: time="2024-02-09T09:46:49.821807686Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 9 09:46:49.821900 env[1801]: time="2024-02-09T09:46:49.821865202Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 9 09:46:49.822870 env[1801]: time="2024-02-09T09:46:49.822787234Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 9 09:46:49.823032 env[1801]: time="2024-02-09T09:46:49.822875386Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Feb 9 09:46:49.823032 env[1801]: time="2024-02-09T09:46:49.822956338Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 9 09:46:49.823032 env[1801]: time="2024-02-09T09:46:49.823016326Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 9 09:46:49.823214 env[1801]: time="2024-02-09T09:46:49.823049722Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 9 09:46:49.823434 env[1801]: time="2024-02-09T09:46:49.823387654Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 9 09:46:49.823800 env[1801]: time="2024-02-09T09:46:49.823737430Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 9 09:46:49.824948 env[1801]: time="2024-02-09T09:46:49.824867662Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 9 09:46:49.825076 env[1801]: time="2024-02-09T09:46:49.824993602Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 9 09:46:49.825076 env[1801]: time="2024-02-09T09:46:49.825029218Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 9 09:46:49.825234 env[1801]: time="2024-02-09T09:46:49.825194398Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 9 09:46:49.825416 env[1801]: time="2024-02-09T09:46:49.825372310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 9 09:46:49.825498 env[1801]: time="2024-02-09T09:46:49.825443674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 9 09:46:49.825752 env[1801]: time="2024-02-09T09:46:49.825477262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 9 09:46:49.825848 env[1801]: time="2024-02-09T09:46:49.825764494Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 9 09:46:49.825913 env[1801]: time="2024-02-09T09:46:49.825858862Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 9 09:46:49.825986 env[1801]: time="2024-02-09T09:46:49.825892486Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 9 09:46:49.825986 env[1801]: time="2024-02-09T09:46:49.825947362Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 9 09:46:49.826107 env[1801]: time="2024-02-09T09:46:49.825987142Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 9 09:46:49.826418 env[1801]: time="2024-02-09T09:46:49.826377334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 9 09:46:49.826503 env[1801]: time="2024-02-09T09:46:49.826422322Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 9 09:46:49.826503 env[1801]: time="2024-02-09T09:46:49.826478314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 9 09:46:49.827727 env[1801]: time="2024-02-09T09:46:49.826508674Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 9 09:46:49.828603 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Feb 9 09:46:49.845251 env[1801]: time="2024-02-09T09:46:49.840707854Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Feb 9 09:46:49.845251 env[1801]: time="2024-02-09T09:46:49.840792826Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 9 09:46:49.845251 env[1801]: time="2024-02-09T09:46:49.840861658Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Feb 9 09:46:49.845251 env[1801]: time="2024-02-09T09:46:49.840963610Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 9 09:46:49.848304 env[1801]: time="2024-02-09T09:46:49.842093482Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 9 09:46:49.852340 env[1801]: time="2024-02-09T09:46:49.850621966Z" level=info msg="Connect containerd service"
Feb 9 09:46:49.855506 extend-filesystems[1847]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Feb 9 09:46:49.855506 extend-filesystems[1847]: old_desc_blocks = 1, new_desc_blocks = 1
Feb 9 09:46:49.855506 extend-filesystems[1847]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Feb 9 09:46:49.885461 extend-filesystems[1780]: Resized filesystem in /dev/nvme0n1p9
Feb 9 09:46:49.861222 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 9 09:46:49.889159 env[1801]: time="2024-02-09T09:46:49.862153030Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 9 09:46:49.861814 systemd[1]: Finished extend-filesystems.service.
Feb 9 09:46:49.891313 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 9 09:46:49.893034 systemd[1]: Finished systemd-machine-id-commit.service.
Feb 9 09:46:49.898202 env[1801]: time="2024-02-09T09:46:49.898104323Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 9 09:46:49.898460 env[1801]: time="2024-02-09T09:46:49.898377659Z" level=info msg="Start subscribing containerd event"
Feb 9 09:46:49.898554 env[1801]: time="2024-02-09T09:46:49.898511855Z" level=info msg="Start recovering state"
Feb 9 09:46:49.898718 env[1801]: time="2024-02-09T09:46:49.898679447Z" level=info msg="Start event monitor"
Feb 9 09:46:49.898791 env[1801]: time="2024-02-09T09:46:49.898733987Z" level=info msg="Start snapshots syncer"
Feb 9 09:46:49.898791 env[1801]: time="2024-02-09T09:46:49.898780199Z" level=info msg="Start cni network conf syncer for default"
Feb 9 09:46:49.898916 env[1801]: time="2024-02-09T09:46:49.898801523Z" level=info msg="Start streaming server"
Feb 9 09:46:49.899691 env[1801]: time="2024-02-09T09:46:49.899632319Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 9 09:46:49.899828 env[1801]: time="2024-02-09T09:46:49.899747999Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 9 09:46:49.899995 systemd[1]: Started containerd.service.
Feb 9 09:46:49.901942 env[1801]: time="2024-02-09T09:46:49.901864367Z" level=info msg="containerd successfully booted in 0.511668s"
Feb 9 09:46:49.952777 tar[1797]: ./vlan
Feb 9 09:46:49.993854 systemd-logind[1793]: Watching system buttons on /dev/input/event0 (Power Button)
Feb 9 09:46:50.011398 amazon-ssm-agent[1853]: 2024/02/09 09:46:50 Failed to load instance info from vault. RegistrationKey does not exist.
Feb 9 09:46:50.013248 amazon-ssm-agent[1853]: Initializing new seelog logger
Feb 9 09:46:50.013490 amazon-ssm-agent[1853]: New Seelog Logger Creation Complete
Feb 9 09:46:50.013618 amazon-ssm-agent[1853]: 2024/02/09 09:46:50 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 9 09:46:50.013829 amazon-ssm-agent[1853]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 9 09:46:50.014329 amazon-ssm-agent[1853]: 2024/02/09 09:46:50 processing appconfig overrides
Feb 9 09:46:50.028043 systemd-logind[1793]: New seat seat0.
Feb 9 09:46:50.042831 systemd[1]: Started systemd-logind.service.
Feb 9 09:46:50.242345 systemd[1]: nvidia.service: Deactivated successfully.
Feb 9 09:46:50.406360 tar[1797]: ./portmap
Feb 9 09:46:50.467814 dbus-daemon[1778]: [system] Successfully activated service 'org.freedesktop.hostname1'
Feb 9 09:46:50.468063 systemd[1]: Started systemd-hostnamed.service.
Feb 9 09:46:50.485519 dbus-daemon[1778]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1849 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Feb 9 09:46:50.490560 systemd[1]: Starting polkit.service...
Feb 9 09:46:50.519238 coreos-metadata[1776]: Feb 09 09:46:50.517 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Feb 9 09:46:50.525473 coreos-metadata[1776]: Feb 09 09:46:50.525 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1
Feb 9 09:46:50.532227 coreos-metadata[1776]: Feb 09 09:46:50.532 INFO Fetch successful
Feb 9 09:46:50.532557 coreos-metadata[1776]: Feb 09 09:46:50.532 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1
Feb 9 09:46:50.533750 coreos-metadata[1776]: Feb 09 09:46:50.533 INFO Fetch successful
Feb 9 09:46:50.537174 unknown[1776]: wrote ssh authorized keys file for user: core
Feb 9 09:46:50.545163 polkitd[1962]: Started polkitd version 121
Feb 9 09:46:50.576075 update-ssh-keys[1968]: Updated "/home/core/.ssh/authorized_keys"
Feb 9 09:46:50.576921 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Feb 9 09:46:50.591346 polkitd[1962]: Loading rules from directory /etc/polkit-1/rules.d
Feb 9 09:46:50.591703 polkitd[1962]: Loading rules from directory /usr/share/polkit-1/rules.d
Feb 9 09:46:50.596736 polkitd[1962]: Finished loading, compiling and executing 2 rules
Feb 9 09:46:50.598885 dbus-daemon[1778]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Feb 9 09:46:50.599155 systemd[1]: Started polkit.service.
Feb 9 09:46:50.600651 polkitd[1962]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Feb 9 09:46:50.624482 amazon-ssm-agent[1853]: 2024-02-09 09:46:50 INFO Create new startup processor
Feb 9 09:46:50.624991 amazon-ssm-agent[1853]: 2024-02-09 09:46:50 INFO [LongRunningPluginsManager] registered plugins: {}
Feb 9 09:46:50.625086 amazon-ssm-agent[1853]: 2024-02-09 09:46:50 INFO Initializing bookkeeping folders
Feb 9 09:46:50.625086 amazon-ssm-agent[1853]: 2024-02-09 09:46:50 INFO removing the completed state files
Feb 9 09:46:50.625086 amazon-ssm-agent[1853]: 2024-02-09 09:46:50 INFO Initializing bookkeeping folders for long running plugins
Feb 9 09:46:50.625266 amazon-ssm-agent[1853]: 2024-02-09 09:46:50 INFO Initializing replies folder for MDS reply requests that couldn't reach the service
Feb 9 09:46:50.625266 amazon-ssm-agent[1853]: 2024-02-09 09:46:50 INFO Initializing healthcheck folders for long running plugins
Feb 9 09:46:50.625266 amazon-ssm-agent[1853]: 2024-02-09 09:46:50 INFO Initializing locations for inventory plugin
Feb 9 09:46:50.625266 amazon-ssm-agent[1853]: 2024-02-09 09:46:50 INFO Initializing default location for custom inventory
Feb 9 09:46:50.625490 amazon-ssm-agent[1853]: 2024-02-09 09:46:50 INFO Initializing default location for file inventory
Feb 9 09:46:50.625490 amazon-ssm-agent[1853]: 2024-02-09 09:46:50 INFO Initializing default location for role inventory
Feb 9 09:46:50.625490 amazon-ssm-agent[1853]: 2024-02-09 09:46:50 INFO Init the cloudwatchlogs publisher
Feb 9 09:46:50.625490 amazon-ssm-agent[1853]: 2024-02-09 09:46:50 INFO [instanceID=i-079a7151175056455] Successfully loaded platform independent plugin aws:softwareInventory
Feb 9 09:46:50.625490 amazon-ssm-agent[1853]: 2024-02-09 09:46:50 INFO [instanceID=i-079a7151175056455] Successfully loaded platform independent plugin aws:runDockerAction
Feb 9 09:46:50.625798 amazon-ssm-agent[1853]: 2024-02-09 09:46:50 INFO [instanceID=i-079a7151175056455] Successfully loaded platform independent plugin aws:configurePackage
Feb 9 09:46:50.625798 amazon-ssm-agent[1853]: 2024-02-09 09:46:50 INFO [instanceID=i-079a7151175056455] Successfully loaded platform independent plugin aws:runPowerShellScript
Feb 9 09:46:50.625798 amazon-ssm-agent[1853]: 2024-02-09 09:46:50 INFO [instanceID=i-079a7151175056455] Successfully loaded platform independent plugin aws:updateSsmAgent
Feb 9 09:46:50.625798 amazon-ssm-agent[1853]: 2024-02-09 09:46:50 INFO [instanceID=i-079a7151175056455] Successfully loaded platform independent plugin aws:configureDocker
Feb 9 09:46:50.625798 amazon-ssm-agent[1853]: 2024-02-09 09:46:50 INFO [instanceID=i-079a7151175056455] Successfully loaded platform independent plugin aws:refreshAssociation
Feb 9 09:46:50.625798 amazon-ssm-agent[1853]: 2024-02-09 09:46:50 INFO [instanceID=i-079a7151175056455] Successfully loaded platform independent plugin aws:downloadContent
Feb 9 09:46:50.625798 amazon-ssm-agent[1853]: 2024-02-09 09:46:50 INFO [instanceID=i-079a7151175056455] Successfully loaded platform independent plugin aws:runDocument
Feb 9 09:46:50.625798 amazon-ssm-agent[1853]: 2024-02-09 09:46:50 INFO [instanceID=i-079a7151175056455] Successfully loaded platform dependent plugin aws:runShellScript
Feb 9 09:46:50.626232 amazon-ssm-agent[1853]: 2024-02-09 09:46:50 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0
Feb 9 09:46:50.626232 amazon-ssm-agent[1853]: 2024-02-09 09:46:50 INFO OS: linux, Arch: arm64
Feb 9 09:46:50.626908 amazon-ssm-agent[1853]: datastore file /var/lib/amazon/ssm/i-079a7151175056455/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute
Feb 9 09:46:50.633640 amazon-ssm-agent[1853]: 2024-02-09 09:46:50 INFO [MessagingDeliveryService] Starting document processing engine...
Feb 9 09:46:50.660915 systemd-resolved[1745]: System hostname changed to 'ip-172-31-30-62'.
Feb 9 09:46:50.660921 systemd-hostnamed[1849]: Hostname set to (transient)
Feb 9 09:46:50.687755 tar[1797]: ./host-local
Feb 9 09:46:50.743992 amazon-ssm-agent[1853]: 2024-02-09 09:46:50 INFO [MessagingDeliveryService] [EngineProcessor] Starting
Feb 9 09:46:50.839284 amazon-ssm-agent[1853]: 2024-02-09 09:46:50 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing
Feb 9 09:46:50.867514 tar[1797]: ./vrf
Feb 9 09:46:50.933764 amazon-ssm-agent[1853]: 2024-02-09 09:46:50 INFO [MessagingDeliveryService] Starting message polling
Feb 9 09:46:51.028251 tar[1797]: ./bridge
Feb 9 09:46:51.028567 amazon-ssm-agent[1853]: 2024-02-09 09:46:50 INFO [MessagingDeliveryService] Starting send replies to MDS
Feb 9 09:46:51.123586 amazon-ssm-agent[1853]: 2024-02-09 09:46:50 INFO [instanceID=i-079a7151175056455] Starting association polling
Feb 9 09:46:51.163688 tar[1797]: ./tuning
Feb 9 09:46:51.218690 amazon-ssm-agent[1853]: 2024-02-09 09:46:50 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting
Feb 9 09:46:51.272139 tar[1797]: ./firewall
Feb 9 09:46:51.314010 amazon-ssm-agent[1853]: 2024-02-09 09:46:50 INFO [MessagingDeliveryService] [Association] Launching response handler
Feb 9 09:46:51.402787 tar[1797]: ./host-device
Feb 9 09:46:51.409532 amazon-ssm-agent[1853]: 2024-02-09 09:46:50 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing
Feb 9 09:46:51.505265 amazon-ssm-agent[1853]: 2024-02-09 09:46:50 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service
Feb 9 09:46:51.522657 tar[1797]: ./sbr
Feb 9 09:46:51.601167 amazon-ssm-agent[1853]: 2024-02-09 09:46:50 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized
Feb 9 09:46:51.632899 tar[1797]: ./loopback
Feb 9 09:46:51.697377 amazon-ssm-agent[1853]: 2024-02-09 09:46:50 INFO [LongRunningPluginsManager] starting long running plugin manager
Feb 9 09:46:51.727481 tar[1799]: linux-arm64/LICENSE
Feb 9 09:46:51.728107 tar[1799]: linux-arm64/README.md
Feb 9 09:46:51.748235 tar[1797]: ./dhcp
Feb 9 09:46:51.749397 systemd[1]: Finished prepare-helm.service.
Feb 9 09:46:51.797773 amazon-ssm-agent[1853]: 2024-02-09 09:46:50 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute
Feb 9 09:46:51.804421 systemd[1]: Finished prepare-critools.service.
Feb 9 09:46:51.894267 amazon-ssm-agent[1853]: 2024-02-09 09:46:50 INFO [OfflineService] Starting document processing engine...
Feb 9 09:46:51.934585 tar[1797]: ./ptp
Feb 9 09:46:51.991021 amazon-ssm-agent[1853]: 2024-02-09 09:46:50 INFO [OfflineService] [EngineProcessor] Starting
Feb 9 09:46:51.998584 tar[1797]: ./ipvlan
Feb 9 09:46:52.060381 tar[1797]: ./bandwidth
Feb 9 09:46:52.087774 amazon-ssm-agent[1853]: 2024-02-09 09:46:50 INFO [OfflineService] [EngineProcessor] Initial processing
Feb 9 09:46:52.151701 systemd[1]: Finished prepare-cni-plugins.service.
Feb 9 09:46:52.184823 amazon-ssm-agent[1853]: 2024-02-09 09:46:50 INFO [HealthCheck] HealthCheck reporting agent health.
Feb 9 09:46:52.217307 locksmithd[1863]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 9 09:46:52.282204 amazon-ssm-agent[1853]: 2024-02-09 09:46:50 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck
Feb 9 09:46:52.379695 amazon-ssm-agent[1853]: 2024-02-09 09:46:50 INFO [StartupProcessor] Executing startup processor tasks
Feb 9 09:46:52.477313 amazon-ssm-agent[1853]: 2024-02-09 09:46:50 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running
Feb 9 09:46:52.575271 amazon-ssm-agent[1853]: 2024-02-09 09:46:50 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk
Feb 9 09:46:52.673314 amazon-ssm-agent[1853]: 2024-02-09 09:46:50 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.2
Feb 9 09:46:52.771594 amazon-ssm-agent[1853]: 2024-02-09 09:46:50 INFO [MessageGatewayService] Starting session document processing engine...
Feb 9 09:46:52.870089 amazon-ssm-agent[1853]: 2024-02-09 09:46:50 INFO [MessageGatewayService] [EngineProcessor] Starting
Feb 9 09:46:52.968728 amazon-ssm-agent[1853]: 2024-02-09 09:46:50 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module.
Feb 9 09:46:53.067628 amazon-ssm-agent[1853]: 2024-02-09 09:46:50 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-079a7151175056455, requestId: 1c00d60e-512f-4459-8020-abe689f23221
Feb 9 09:46:53.166771 amazon-ssm-agent[1853]: 2024-02-09 09:46:50 INFO [MessageGatewayService] listening reply.
Feb 9 09:46:53.266006 amazon-ssm-agent[1853]: 2024-02-09 09:46:50 INFO [OfflineService] Starting message polling
Feb 9 09:46:53.365395 amazon-ssm-agent[1853]: 2024-02-09 09:46:50 INFO [OfflineService] Starting send replies to MDS
Feb 9 09:46:53.465124 amazon-ssm-agent[1853]: 2024-02-09 09:46:50 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-079a7151175056455?role=subscribe&stream=input
Feb 9 09:46:53.564857 amazon-ssm-agent[1853]: 2024-02-09 09:46:50 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-079a7151175056455?role=subscribe&stream=input
Feb 9 09:46:53.648373 sshd_keygen[1807]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 9 09:46:53.664960 amazon-ssm-agent[1853]: 2024-02-09 09:46:50 INFO [MessageGatewayService] Starting receiving message from control channel
Feb 9 09:46:53.685438 systemd[1]: Finished sshd-keygen.service.
Feb 9 09:46:53.691033 systemd[1]: Starting issuegen.service...
Feb 9 09:46:53.703834 systemd[1]: issuegen.service: Deactivated successfully.
Feb 9 09:46:53.704399 systemd[1]: Finished issuegen.service.
Feb 9 09:46:53.709296 systemd[1]: Starting systemd-user-sessions.service...
Feb 9 09:46:53.725895 systemd[1]: Finished systemd-user-sessions.service.
Feb 9 09:46:53.730957 systemd[1]: Started getty@tty1.service.
Feb 9 09:46:53.736565 systemd[1]: Started serial-getty@ttyS0.service.
Feb 9 09:46:53.739419 systemd[1]: Reached target getty.target.
Feb 9 09:46:53.741708 systemd[1]: Reached target multi-user.target.
Feb 9 09:46:53.746984 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Feb 9 09:46:53.762031 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Feb 9 09:46:53.762569 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Feb 9 09:46:53.765303 amazon-ssm-agent[1853]: 2024-02-09 09:46:50 INFO [MessageGatewayService] [EngineProcessor] Initial processing
Feb 9 09:46:53.769954 systemd[1]: Startup finished in 19.551s (kernel) + 11.759s (userspace) = 31.310s.
Feb 9 09:46:58.892840 systemd[1]: Created slice system-sshd.slice.
Feb 9 09:46:58.895168 systemd[1]: Started sshd@0-172.31.30.62:22-139.178.89.65:47814.service.
Feb 9 09:46:59.078256 sshd[2024]: Accepted publickey for core from 139.178.89.65 port 47814 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M
Feb 9 09:46:59.081005 sshd[2024]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:46:59.097339 systemd[1]: Created slice user-500.slice.
Feb 9 09:46:59.099508 systemd[1]: Starting user-runtime-dir@500.service...
Feb 9 09:46:59.108000 systemd-logind[1793]: New session 1 of user core.
Feb 9 09:46:59.118634 systemd[1]: Finished user-runtime-dir@500.service.
Feb 9 09:46:59.123327 systemd[1]: Starting user@500.service...
Feb 9 09:46:59.134518 (systemd)[2029]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:46:59.312638 systemd[2029]: Queued start job for default target default.target.
Feb 9 09:46:59.313094 systemd[2029]: Reached target paths.target.
Feb 9 09:46:59.313135 systemd[2029]: Reached target sockets.target.
Feb 9 09:46:59.313168 systemd[2029]: Reached target timers.target.
Feb 9 09:46:59.313199 systemd[2029]: Reached target basic.target.
Feb 9 09:46:59.313301 systemd[2029]: Reached target default.target.
Feb 9 09:46:59.313370 systemd[2029]: Startup finished in 167ms.
Feb 9 09:46:59.313956 systemd[1]: Started user@500.service.
Feb 9 09:46:59.316016 systemd[1]: Started session-1.scope.
Feb 9 09:46:59.463268 systemd[1]: Started sshd@1-172.31.30.62:22-139.178.89.65:47826.service.
Feb 9 09:46:59.635953 sshd[2038]: Accepted publickey for core from 139.178.89.65 port 47826 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M
Feb 9 09:46:59.638412 sshd[2038]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:46:59.646669 systemd-logind[1793]: New session 2 of user core.
Feb 9 09:46:59.647718 systemd[1]: Started session-2.scope.
Feb 9 09:46:59.781433 sshd[2038]: pam_unix(sshd:session): session closed for user core
Feb 9 09:46:59.786603 systemd-logind[1793]: Session 2 logged out. Waiting for processes to exit.
Feb 9 09:46:59.787152 systemd[1]: sshd@1-172.31.30.62:22-139.178.89.65:47826.service: Deactivated successfully.
Feb 9 09:46:59.788985 systemd[1]: session-2.scope: Deactivated successfully.
Feb 9 09:46:59.789957 systemd-logind[1793]: Removed session 2.
Feb 9 09:46:59.806864 systemd[1]: Started sshd@2-172.31.30.62:22-139.178.89.65:47836.service.
Feb 9 09:46:59.977558 sshd[2045]: Accepted publickey for core from 139.178.89.65 port 47836 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M
Feb 9 09:46:59.980548 sshd[2045]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:46:59.988795 systemd-logind[1793]: New session 3 of user core.
Feb 9 09:46:59.989141 systemd[1]: Started session-3.scope.
Feb 9 09:47:00.112468 sshd[2045]: pam_unix(sshd:session): session closed for user core
Feb 9 09:47:00.117090 systemd[1]: sshd@2-172.31.30.62:22-139.178.89.65:47836.service: Deactivated successfully.
Feb 9 09:47:00.118461 systemd[1]: session-3.scope: Deactivated successfully.
Feb 9 09:47:00.121024 systemd-logind[1793]: Session 3 logged out. Waiting for processes to exit.
Feb 9 09:47:00.123019 systemd-logind[1793]: Removed session 3.
Feb 9 09:47:00.137611 systemd[1]: Started sshd@3-172.31.30.62:22-139.178.89.65:47846.service.
Feb 9 09:47:00.307052 sshd[2052]: Accepted publickey for core from 139.178.89.65 port 47846 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M
Feb 9 09:47:00.309643 sshd[2052]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:47:00.318094 systemd-logind[1793]: New session 4 of user core.
Feb 9 09:47:00.319153 systemd[1]: Started session-4.scope.
Feb 9 09:47:00.449406 sshd[2052]: pam_unix(sshd:session): session closed for user core
Feb 9 09:47:00.454645 systemd[1]: sshd@3-172.31.30.62:22-139.178.89.65:47846.service: Deactivated successfully.
Feb 9 09:47:00.456083 systemd[1]: session-4.scope: Deactivated successfully.
Feb 9 09:47:00.458569 systemd-logind[1793]: Session 4 logged out. Waiting for processes to exit.
Feb 9 09:47:00.460970 systemd-logind[1793]: Removed session 4.
Feb 9 09:47:00.476293 systemd[1]: Started sshd@4-172.31.30.62:22-139.178.89.65:47854.service.
Feb 9 09:47:00.652252 sshd[2059]: Accepted publickey for core from 139.178.89.65 port 47854 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M
Feb 9 09:47:00.654083 sshd[2059]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:47:00.662103 systemd-logind[1793]: New session 5 of user core.
Feb 9 09:47:00.663043 systemd[1]: Started session-5.scope.
Feb 9 09:47:00.783365 sudo[2063]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Feb 9 09:47:00.784405 sudo[2063]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 9 09:47:00.800824 dbus-daemon[1778]: avc: received setenforce notice (enforcing=1)
Feb 9 09:47:00.804328 sudo[2063]: pam_unix(sudo:session): session closed for user root
Feb 9 09:47:00.829992 sshd[2059]: pam_unix(sshd:session): session closed for user core
Feb 9 09:47:00.834984 systemd[1]: sshd@4-172.31.30.62:22-139.178.89.65:47854.service: Deactivated successfully.
Feb 9 09:47:00.836969 systemd[1]: session-5.scope: Deactivated successfully.
Feb 9 09:47:00.837006 systemd-logind[1793]: Session 5 logged out. Waiting for processes to exit.
Feb 9 09:47:00.839918 systemd-logind[1793]: Removed session 5.
Feb 9 09:47:00.854255 systemd[1]: Started sshd@5-172.31.30.62:22-139.178.89.65:47856.service.
Feb 9 09:47:01.026188 sshd[2067]: Accepted publickey for core from 139.178.89.65 port 47856 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M
Feb 9 09:47:01.028865 sshd[2067]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:47:01.037873 systemd[1]: Started session-6.scope.
Feb 9 09:47:01.040035 systemd-logind[1793]: New session 6 of user core.
Feb 9 09:47:01.148027 sudo[2072]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Feb 9 09:47:01.148545 sudo[2072]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 9 09:47:01.153687 sudo[2072]: pam_unix(sudo:session): session closed for user root
Feb 9 09:47:01.162482 sudo[2071]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Feb 9 09:47:01.163544 sudo[2071]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 9 09:47:01.181031 systemd[1]: Stopping audit-rules.service...
Feb 9 09:47:01.181000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Feb 9 09:47:01.183794 auditctl[2075]: No rules
Feb 9 09:47:01.185361 kernel: kauditd_printk_skb: 38 callbacks suppressed
Feb 9 09:47:01.185457 kernel: audit: type=1305 audit(1707472021.181:128): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Feb 9 09:47:01.186096 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 9 09:47:01.186631 systemd[1]: Stopped audit-rules.service.
Feb 9 09:47:01.190397 systemd[1]: Starting audit-rules.service...
Feb 9 09:47:01.181000 audit[2075]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffda833a80 a2=420 a3=0 items=0 ppid=1 pid=2075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:47:01.203210 kernel: audit: type=1300 audit(1707472021.181:128): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffda833a80 a2=420 a3=0 items=0 ppid=1 pid=2075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:47:01.181000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44
Feb 9 09:47:01.214395 kernel: audit: type=1327 audit(1707472021.181:128): proctitle=2F7362696E2F617564697463746C002D44
Feb 9 09:47:01.214499 kernel: audit: type=1131 audit(1707472021.185:129): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:01.185000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:01.230151 augenrules[2093]: No rules
Feb 9 09:47:01.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:01.231889 systemd[1]: Finished audit-rules.service.
Feb 9 09:47:01.241269 sudo[2071]: pam_unix(sudo:session): session closed for user root
Feb 9 09:47:01.240000 audit[2071]: USER_END pid=2071 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:01.252152 kernel: audit: type=1130 audit(1707472021.231:130): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:01.252226 kernel: audit: type=1106 audit(1707472021.240:131): pid=2071 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:01.252269 kernel: audit: type=1104 audit(1707472021.240:132): pid=2071 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:01.240000 audit[2071]: CRED_DISP pid=2071 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:01.264750 sshd[2067]: pam_unix(sshd:session): session closed for user core
Feb 9 09:47:01.265000 audit[2067]: USER_END pid=2067 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:47:01.265000 audit[2067]: CRED_DISP pid=2067 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:47:01.279728 systemd[1]: sshd@5-172.31.30.62:22-139.178.89.65:47856.service: Deactivated successfully.
Feb 9 09:47:01.280974 systemd[1]: session-6.scope: Deactivated successfully.
Feb 9 09:47:01.289167 kernel: audit: type=1106 audit(1707472021.265:133): pid=2067 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:47:01.289265 kernel: audit: type=1104 audit(1707472021.265:134): pid=2067 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:47:01.290774 systemd-logind[1793]: Session 6 logged out. Waiting for processes to exit.
Feb 9 09:47:01.278000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.30.62:22-139.178.89.65:47856 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:01.294151 systemd[1]: Started sshd@6-172.31.30.62:22-139.178.89.65:47860.service.
Feb 9 09:47:01.300805 kernel: audit: type=1131 audit(1707472021.278:135): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.30.62:22-139.178.89.65:47856 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:01.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.30.62:22-139.178.89.65:47860 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:01.301775 systemd-logind[1793]: Removed session 6.
Feb 9 09:47:01.469000 audit[2100]: USER_ACCT pid=2100 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:47:01.470197 sshd[2100]: Accepted publickey for core from 139.178.89.65 port 47860 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M
Feb 9 09:47:01.471000 audit[2100]: CRED_ACQ pid=2100 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:47:01.471000 audit[2100]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcdc78ea0 a2=3 a3=1 items=0 ppid=1 pid=2100 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:47:01.471000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 9 09:47:01.473227 sshd[2100]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:47:01.481737 systemd-logind[1793]: New session 7 of user core.
Feb 9 09:47:01.482675 systemd[1]: Started session-7.scope.
Feb 9 09:47:01.492000 audit[2100]: USER_START pid=2100 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:47:01.495000 audit[2103]: CRED_ACQ pid=2103 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:47:01.590000 audit[2104]: USER_ACCT pid=2104 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:01.590809 sudo[2104]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 9 09:47:01.591000 audit[2104]: CRED_REFR pid=2104 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:01.592078 sudo[2104]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 9 09:47:01.594000 audit[2104]: USER_START pid=2104 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:02.276957 systemd[1]: Starting docker.service...
Feb 9 09:47:02.354550 env[2119]: time="2024-02-09T09:47:02.354456761Z" level=info msg="Starting up"
Feb 9 09:47:02.357142 env[2119]: time="2024-02-09T09:47:02.357097932Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 9 09:47:02.357327 env[2119]: time="2024-02-09T09:47:02.357288258Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 9 09:47:02.357484 env[2119]: time="2024-02-09T09:47:02.357451190Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Feb 9 09:47:02.357613 env[2119]: time="2024-02-09T09:47:02.357564839Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 9 09:47:02.360245 env[2119]: time="2024-02-09T09:47:02.360203120Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 9 09:47:02.360418 env[2119]: time="2024-02-09T09:47:02.360390265Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 9 09:47:02.360542 env[2119]: time="2024-02-09T09:47:02.360511521Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Feb 9 09:47:02.360714 env[2119]: time="2024-02-09T09:47:02.360686608Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 9 09:47:02.372140 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2043073799-merged.mount: Deactivated successfully.
Feb 9 09:47:03.022587 env[2119]: time="2024-02-09T09:47:03.022485224Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Feb 9 09:47:03.022587 env[2119]: time="2024-02-09T09:47:03.022533082Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Feb 9 09:47:03.022937 env[2119]: time="2024-02-09T09:47:03.022822202Z" level=info msg="Loading containers: start."
Feb 9 09:47:03.095000 audit[2150]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2150 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 09:47:03.095000 audit[2150]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffc249ec20 a2=0 a3=1 items=0 ppid=2119 pid=2150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:47:03.095000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552
Feb 9 09:47:03.099000 audit[2152]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2152 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 09:47:03.099000 audit[2152]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffe537fc00 a2=0 a3=1 items=0 ppid=2119 pid=2152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:47:03.099000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552
Feb 9 09:47:03.103000 audit[2154]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2154 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 09:47:03.103000 audit[2154]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffe455c820 a2=0 a3=1 items=0 ppid=2119 pid=2154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:47:03.103000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31
Feb 9 09:47:03.107000 audit[2156]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2156 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 09:47:03.107000 audit[2156]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffd57bb450 a2=0 a3=1 items=0 ppid=2119 pid=2156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:47:03.107000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32
Feb 9 09:47:03.112000 audit[2158]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=2158 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 09:47:03.112000 audit[2158]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff5e1f0a0 a2=0 a3=1 items=0 ppid=2119 pid=2158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:47:03.112000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E
Feb 9 09:47:03.138000 audit[2163]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=2163 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 09:47:03.138000 audit[2163]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff1022c80 a2=0 a3=1 items=0 ppid=2119 pid=2163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:47:03.138000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E
Feb 9 09:47:03.150000 audit[2165]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2165 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 09:47:03.150000 audit[2165]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe56ab2b0 a2=0 a3=1 items=0 ppid=2119 pid=2165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:47:03.150000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552
Feb 9 09:47:03.154000 audit[2167]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=2167 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 09:47:03.154000 audit[2167]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=fffff802d040 a2=0 a3=1 items=0 ppid=2119 pid=2167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:47:03.154000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E
Feb 9 09:47:03.158000 audit[2169]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=2169 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 09:47:03.158000 audit[2169]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=308 a0=3 a1=ffffc99431e0 a2=0 a3=1 items=0 ppid=2119 pid=2169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:47:03.158000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Feb 9 09:47:03.174000 audit[2173]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=2173 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 09:47:03.174000 audit[2173]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffe3fb3a50 a2=0 a3=1 items=0 ppid=2119 pid=2173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:47:03.174000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552
Feb 9 09:47:03.178000 audit[2174]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2174 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 09:47:03.178000 audit[2174]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffc9c36fc0 a2=0 a3=1 items=0 ppid=2119 pid=2174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:47:03.178000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Feb 9 09:47:03.189642 kernel: Initializing XFRM netlink socket
Feb 9 09:47:03.232146 env[2119]: time="2024-02-09T09:47:03.232093518Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Feb 9 09:47:03.233943 (udev-worker)[2130]: Network interface NamePolicy= disabled on kernel command line.
Feb 9 09:47:03.262000 audit[2182]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=2182 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 09:47:03.262000 audit[2182]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=492 a0=3 a1=ffffff83af10 a2=0 a3=1 items=0 ppid=2119 pid=2182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:47:03.262000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445
Feb 9 09:47:03.275000 audit[2185]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=2185 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 09:47:03.275000 audit[2185]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffe7c49ed0 a2=0 a3=1 items=0 ppid=2119 pid=2185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:47:03.275000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E
Feb 9 09:47:03.281000 audit[2188]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=2188 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 09:47:03.281000 audit[2188]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffd6258750 a2=0 a3=1 items=0 ppid=2119 pid=2188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:47:03.281000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054
Feb 9 09:47:03.286000 audit[2190]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=2190 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 09:47:03.286000 audit[2190]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffed7ac480 a2=0 a3=1 items=0 ppid=2119 pid=2190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:47:03.286000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054
Feb 9 09:47:03.290000 audit[2192]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=2192 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 09:47:03.290000 audit[2192]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=356 a0=3 a1=ffffeacba230 a2=0 a3=1 items=0 ppid=2119 pid=2192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:47:03.290000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552
Feb 9 09:47:03.294000 audit[2194]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=2194 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 09:47:03.294000 audit[2194]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=444 a0=3 a1=ffffdedc4e00 a2=0 a3=1 items=0 ppid=2119 pid=2194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:47:03.294000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38
Feb 9 09:47:03.299000 audit[2196]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=2196 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 09:47:03.299000 audit[2196]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=304 a0=3 a1=ffffc9882d70 a2=0 a3=1 items=0 ppid=2119 pid=2196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:47:03.299000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552
Feb 9 09:47:03.312000 audit[2199]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=2199 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 09:47:03.312000 audit[2199]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=508 a0=3 a1=ffffc285fff0 a2=0 a3=1 items=0 ppid=2119 pid=2199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:47:03.312000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054
Feb 9 09:47:03.316000 audit[2201]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=2201 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 09:47:03.316000 audit[2201]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=240 a0=3 a1=ffffd36fd5d0 a2=0 a3=1 items=0 ppid=2119 pid=2201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:47:03.316000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31
Feb 9 09:47:03.321000 audit[2203]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=2203 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 09:47:03.321000 audit[2203]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffcf84b720 a2=0 a3=1 items=0 ppid=2119 pid=2203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:47:03.321000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32
Feb 9 09:47:03.326000 audit[2205]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=2205 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 09:47:03.326000 audit[2205]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffc21029d0 a2=0 a3=1 items=0 ppid=2119 pid=2205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:47:03.326000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50
Feb 9 09:47:03.327916 systemd-networkd[1595]: docker0: Link UP
Feb 9 09:47:03.344000 audit[2209]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=2209 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 09:47:03.344000 audit[2209]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffff8df690 a2=0 a3=1 items=0 ppid=2119 pid=2209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:47:03.344000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552
Feb 9 09:47:03.346000 audit[2210]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=2210 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 09:47:03.346000 audit[2210]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffd17a0960 a2=0 a3=1 items=0 ppid=2119 pid=2210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:47:03.346000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Feb 9 09:47:03.348505 env[2119]: time="2024-02-09T09:47:03.348461226Z" level=info msg="Loading containers: done."
Feb 9 09:47:03.370889 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1405530208-merged.mount: Deactivated successfully.
Feb 9 09:47:03.390207 env[2119]: time="2024-02-09T09:47:03.390131310Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 9 09:47:03.390928 env[2119]: time="2024-02-09T09:47:03.390607864Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 9 09:47:03.390928 env[2119]: time="2024-02-09T09:47:03.390863853Z" level=info msg="Daemon has completed initialization" Feb 9 09:47:03.420347 systemd[1]: Started docker.service. Feb 9 09:47:03.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:03.431171 env[2119]: time="2024-02-09T09:47:03.431099214Z" level=info msg="API listen on /run/docker.sock" Feb 9 09:47:03.471588 systemd[1]: Reloading. Feb 9 09:47:03.596340 /usr/lib/systemd/system-generators/torcx-generator[2260]: time="2024-02-09T09:47:03Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:47:03.605746 /usr/lib/systemd/system-generators/torcx-generator[2260]: time="2024-02-09T09:47:03Z" level=info msg="torcx already run" Feb 9 09:47:03.788415 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:47:03.788699 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 9 09:47:03.832114 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:47:04.038766 systemd[1]: Started kubelet.service. Feb 9 09:47:04.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:04.163999 kubelet[2318]: E0209 09:47:04.163889 2318 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 09:47:04.167000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 9 09:47:04.167879 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 09:47:04.168308 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 09:47:04.611320 env[1801]: time="2024-02-09T09:47:04.611233674Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 9 09:47:05.209717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1585497529.mount: Deactivated successfully. 
Feb 9 09:47:07.531915 env[1801]: time="2024-02-09T09:47:07.531837965Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:07.536054 env[1801]: time="2024-02-09T09:47:07.535958855Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d88fbf485621d26e515136c1848b666d7dfe0fa84ca7ebd826447b039d306d88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:07.541393 env[1801]: time="2024-02-09T09:47:07.541309021Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:07.545344 env[1801]: time="2024-02-09T09:47:07.545261621Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:07.547172 env[1801]: time="2024-02-09T09:47:07.547125035Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:d88fbf485621d26e515136c1848b666d7dfe0fa84ca7ebd826447b039d306d88\"" Feb 9 09:47:07.563730 env[1801]: time="2024-02-09T09:47:07.563679874Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 9 09:47:08.125699 amazon-ssm-agent[1853]: 2024-02-09 09:47:08 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds. 
Feb 9 09:47:10.003812 env[1801]: time="2024-02-09T09:47:10.003744119Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:10.008788 env[1801]: time="2024-02-09T09:47:10.008738075Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:71d8e883014e0849ca9a3161bd1feac09ad210dea2f4140732e218f04a6826c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:10.014000 env[1801]: time="2024-02-09T09:47:10.013951153Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:10.021546 env[1801]: time="2024-02-09T09:47:10.021494910Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:10.023575 env[1801]: time="2024-02-09T09:47:10.023515274Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:71d8e883014e0849ca9a3161bd1feac09ad210dea2f4140732e218f04a6826c2\"" Feb 9 09:47:10.042492 env[1801]: time="2024-02-09T09:47:10.042425515Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 9 09:47:11.450765 env[1801]: time="2024-02-09T09:47:11.450703890Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:11.453985 env[1801]: time="2024-02-09T09:47:11.453935332Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a636f3d6300bad4775ea80ad544e38f486a039732c4871bddc1db3a5336c871a,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 9 09:47:11.457186 env[1801]: time="2024-02-09T09:47:11.457119406Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:11.460590 env[1801]: time="2024-02-09T09:47:11.460506145Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:11.462362 env[1801]: time="2024-02-09T09:47:11.462315864Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:a636f3d6300bad4775ea80ad544e38f486a039732c4871bddc1db3a5336c871a\"" Feb 9 09:47:11.478177 env[1801]: time="2024-02-09T09:47:11.478125273Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 9 09:47:12.722268 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount268637828.mount: Deactivated successfully. 
Feb 9 09:47:13.390230 env[1801]: time="2024-02-09T09:47:13.390170283Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:13.394284 env[1801]: time="2024-02-09T09:47:13.394234114Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:13.396728 env[1801]: time="2024-02-09T09:47:13.396665414Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:13.399386 env[1801]: time="2024-02-09T09:47:13.399325787Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:13.400556 env[1801]: time="2024-02-09T09:47:13.400513191Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926\"" Feb 9 09:47:13.418491 env[1801]: time="2024-02-09T09:47:13.418402105Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 9 09:47:13.884032 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount92090909.mount: Deactivated successfully. 
Feb 9 09:47:13.894023 env[1801]: time="2024-02-09T09:47:13.893969606Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:13.897054 env[1801]: time="2024-02-09T09:47:13.897007507Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:13.899686 env[1801]: time="2024-02-09T09:47:13.899640602Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:13.903789 env[1801]: time="2024-02-09T09:47:13.903746245Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:13.905340 env[1801]: time="2024-02-09T09:47:13.905288385Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 9 09:47:13.924033 env[1801]: time="2024-02-09T09:47:13.923970500Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\"" Feb 9 09:47:14.255299 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 9 09:47:14.266609 kernel: kauditd_printk_skb: 86 callbacks suppressed Feb 9 09:47:14.266784 kernel: audit: type=1130 audit(1707472034.255:172): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:47:14.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:14.255647 systemd[1]: Stopped kubelet.service. Feb 9 09:47:14.258596 systemd[1]: Started kubelet.service. Feb 9 09:47:14.276461 kernel: audit: type=1131 audit(1707472034.255:173): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:14.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:14.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:14.285418 kernel: audit: type=1130 audit(1707472034.257:174): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:14.375266 kubelet[2363]: E0209 09:47:14.375178 2363 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 09:47:14.384161 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 09:47:14.384559 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 09:47:14.384000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Feb 9 09:47:14.394871 kernel: audit: type=1131 audit(1707472034.384:175): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 9 09:47:15.086021 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount859657127.mount: Deactivated successfully. Feb 9 09:47:18.615228 env[1801]: time="2024-02-09T09:47:18.615168772Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:18.618490 env[1801]: time="2024-02-09T09:47:18.618439640Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ef245802824036d4a23ba6f8b3f04c055416f9dc73a54d546b1f98ad16f6b8cb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:18.621648 env[1801]: time="2024-02-09T09:47:18.621600846Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:18.624993 env[1801]: time="2024-02-09T09:47:18.624947984Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:18.626317 env[1801]: time="2024-02-09T09:47:18.626273266Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:ef245802824036d4a23ba6f8b3f04c055416f9dc73a54d546b1f98ad16f6b8cb\"" Feb 9 09:47:18.644204 env[1801]: time="2024-02-09T09:47:18.644136164Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 9 09:47:19.216816 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount934657739.mount: Deactivated successfully. 
Feb 9 09:47:19.934590 env[1801]: time="2024-02-09T09:47:19.934507824Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:19.938810 env[1801]: time="2024-02-09T09:47:19.938760685Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b19406328e70dd2f6a36d6dbe4e867b0684ced2fdeb2f02ecb54ead39ec0bac0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:19.941666 env[1801]: time="2024-02-09T09:47:19.941607747Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:19.944542 env[1801]: time="2024-02-09T09:47:19.944479477Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:19.945789 env[1801]: time="2024-02-09T09:47:19.945745338Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:b19406328e70dd2f6a36d6dbe4e867b0684ced2fdeb2f02ecb54ead39ec0bac0\"" Feb 9 09:47:20.694463 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 9 09:47:20.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:20.704967 kernel: audit: type=1131 audit(1707472040.694:176): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:47:24.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:24.505218 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 9 09:47:24.505524 systemd[1]: Stopped kubelet.service. Feb 9 09:47:24.508406 systemd[1]: Started kubelet.service. Feb 9 09:47:24.505000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:24.528349 kernel: audit: type=1130 audit(1707472044.504:177): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:24.528433 kernel: audit: type=1131 audit(1707472044.505:178): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:24.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:24.542432 kernel: audit: type=1130 audit(1707472044.509:179): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:47:24.634766 kubelet[2435]: E0209 09:47:24.634677 2435 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 09:47:24.638162 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 09:47:24.638615 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 09:47:24.638000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 9 09:47:24.650610 kernel: audit: type=1131 audit(1707472044.638:180): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 9 09:47:26.958764 systemd[1]: Stopped kubelet.service. Feb 9 09:47:26.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:26.957000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:26.977306 kernel: audit: type=1130 audit(1707472046.957:181): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:26.977411 kernel: audit: type=1131 audit(1707472046.957:182): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:47:27.001681 systemd[1]: Reloading. Feb 9 09:47:27.129148 /usr/lib/systemd/system-generators/torcx-generator[2468]: time="2024-02-09T09:47:27Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:47:27.136703 /usr/lib/systemd/system-generators/torcx-generator[2468]: time="2024-02-09T09:47:27Z" level=info msg="torcx already run" Feb 9 09:47:27.297207 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:47:27.297248 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:47:27.340074 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:47:27.552421 systemd[1]: Started kubelet.service. Feb 9 09:47:27.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:27.562625 kernel: audit: type=1130 audit(1707472047.553:183): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:27.661310 kubelet[2527]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. 
Feb 9 09:47:27.661969 kubelet[2527]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 09:47:27.662234 kubelet[2527]: I0209 09:47:27.662182 2527 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 09:47:27.664718 kubelet[2527]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 09:47:27.664877 kubelet[2527]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 09:47:28.168412 kubelet[2527]: I0209 09:47:28.168371 2527 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 09:47:28.168638 kubelet[2527]: I0209 09:47:28.168616 2527 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 09:47:28.169128 kubelet[2527]: I0209 09:47:28.169104 2527 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 09:47:28.177219 kubelet[2527]: E0209 09:47:28.177164 2527 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.30.62:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.30.62:6443: connect: connection refused Feb 9 09:47:28.177364 kubelet[2527]: I0209 09:47:28.177258 2527 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 09:47:28.178357 kubelet[2527]: W0209 09:47:28.178317 2527 machine.go:65] Cannot read vendor id 
correctly, set empty. Feb 9 09:47:28.179732 kubelet[2527]: I0209 09:47:28.179683 2527 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 9 09:47:28.180591 kubelet[2527]: I0209 09:47:28.180526 2527 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 09:47:28.180832 kubelet[2527]: I0209 09:47:28.180792 2527 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 09:47:28.181003 kubelet[2527]: I0209 09:47:28.180840 2527 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 09:47:28.181003 kubelet[2527]: I0209 09:47:28.180866 2527 
container_manager_linux.go:308] "Creating device plugin manager" Feb 9 09:47:28.181159 kubelet[2527]: I0209 09:47:28.181060 2527 state_mem.go:36] "Initialized new in-memory state store" Feb 9 09:47:28.186747 kubelet[2527]: I0209 09:47:28.186630 2527 kubelet.go:398] "Attempting to sync node with API server" Feb 9 09:47:28.186747 kubelet[2527]: I0209 09:47:28.186750 2527 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 09:47:28.186988 kubelet[2527]: I0209 09:47:28.186839 2527 kubelet.go:297] "Adding apiserver pod source" Feb 9 09:47:28.186988 kubelet[2527]: I0209 09:47:28.186882 2527 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 09:47:28.189819 kubelet[2527]: W0209 09:47:28.189739 2527 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.30.62:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-62&limit=500&resourceVersion=0": dial tcp 172.31.30.62:6443: connect: connection refused Feb 9 09:47:28.190149 kubelet[2527]: E0209 09:47:28.190123 2527 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.30.62:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-62&limit=500&resourceVersion=0": dial tcp 172.31.30.62:6443: connect: connection refused Feb 9 09:47:28.191175 kubelet[2527]: I0209 09:47:28.191141 2527 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 09:47:28.193249 kubelet[2527]: W0209 09:47:28.193201 2527 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 9 09:47:28.197113 kubelet[2527]: I0209 09:47:28.197074 2527 server.go:1186] "Started kubelet" Feb 9 09:47:28.198000 audit[2527]: AVC avc: denied { mac_admin } for pid=2527 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:47:28.205812 kubelet[2527]: E0209 09:47:28.204317 2527 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-30-62.17b228c800b1650e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-30-62", UID:"ip-172-31-30-62", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-30-62"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 47, 28, 197035278, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 47, 28, 197035278, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://172.31.30.62:6443/api/v1/namespaces/default/events": dial tcp 172.31.30.62:6443: connect: connection refused'(may retry after sleeping) Feb 9 09:47:28.205812 kubelet[2527]: W0209 09:47:28.205022 2527 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.30.62:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 
172.31.30.62:6443: connect: connection refused Feb 9 09:47:28.205812 kubelet[2527]: E0209 09:47:28.205112 2527 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.30.62:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.62:6443: connect: connection refused Feb 9 09:47:28.206204 kubelet[2527]: I0209 09:47:28.206173 2527 kubelet.go:1341] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Feb 9 09:47:28.206829 kubelet[2527]: I0209 09:47:28.206805 2527 kubelet.go:1345] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Feb 9 09:47:28.207102 kubelet[2527]: I0209 09:47:28.207081 2527 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 09:47:28.207510 kubelet[2527]: E0209 09:47:28.207485 2527 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 09:47:28.207683 kubelet[2527]: E0209 09:47:28.207662 2527 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 09:47:28.198000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 09:47:28.212130 kubelet[2527]: I0209 09:47:28.212102 2527 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 09:47:28.212389 kernel: audit: type=1400 audit(1707472048.198:184): avc: denied { mac_admin } for pid=2527 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:47:28.212496 kernel: audit: type=1401 audit(1707472048.198:184): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 09:47:28.212922 kubelet[2527]: I0209 09:47:28.212888 2527 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 09:47:28.213659 kubelet[2527]: I0209 09:47:28.213624 2527 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 09:47:28.198000 audit[2527]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000c84db0 a1=4000d18900 a2=4000c84d80 a3=25 items=0 ppid=1 pid=2527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:28.215787 kubelet[2527]: I0209 09:47:28.215753 2527 server.go:451] "Adding debug handlers to kubelet server" Feb 9 09:47:28.220892 kubelet[2527]: W0209 09:47:28.220832 2527 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.30.62:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.62:6443: connect: connection refused Feb 9 09:47:28.221139 kubelet[2527]: E0209 09:47:28.221117 2527 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
"https://172.31.30.62:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.62:6443: connect: connection refused Feb 9 09:47:28.221380 kubelet[2527]: E0209 09:47:28.221349 2527 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://172.31.30.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-62?timeout=10s": dial tcp 172.31.30.62:6443: connect: connection refused Feb 9 09:47:28.225145 kernel: audit: type=1300 audit(1707472048.198:184): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000c84db0 a1=4000d18900 a2=4000c84d80 a3=25 items=0 ppid=1 pid=2527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:28.198000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 09:47:28.236943 kernel: audit: type=1327 audit(1707472048.198:184): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 09:47:28.204000 audit[2527]: AVC avc: denied { mac_admin } for pid=2527 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:47:28.246303 kernel: audit: type=1400 audit(1707472048.204:185): avc: denied { mac_admin } for pid=2527 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:47:28.246413 kernel: audit: 
type=1401 audit(1707472048.204:185): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 09:47:28.204000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 09:47:28.204000 audit[2527]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000d148c0 a1=4000d18918 a2=4000c84e70 a3=25 items=0 ppid=1 pid=2527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:28.261951 kernel: audit: type=1300 audit(1707472048.204:185): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000d148c0 a1=4000d18918 a2=4000c84e70 a3=25 items=0 ppid=1 pid=2527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:28.204000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 09:47:28.236000 audit[2537]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=2537 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:47:28.236000 audit[2537]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffd9b3f4c0 a2=0 a3=1 items=0 ppid=2527 pid=2537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:28.236000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 9 09:47:28.243000 audit[2538]: NETFILTER_CFG table=filter:27 family=2 entries=1 
op=nft_register_chain pid=2538 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:47:28.243000 audit[2538]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc6445130 a2=0 a3=1 items=0 ppid=2527 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:28.243000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 9 09:47:28.245000 audit[2540]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=2540 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:47:28.245000 audit[2540]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=fffff3407160 a2=0 a3=1 items=0 ppid=2527 pid=2540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:28.245000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 9 09:47:28.249000 audit[2542]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=2542 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:47:28.249000 audit[2542]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffcaec3690 a2=0 a3=1 items=0 ppid=2527 pid=2542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:28.249000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 9 09:47:28.277000 audit[2549]: NETFILTER_CFG table=filter:30 
family=2 entries=1 op=nft_register_rule pid=2549 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:47:28.277000 audit[2549]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffcee6ad00 a2=0 a3=1 items=0 ppid=2527 pid=2549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:28.277000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Feb 9 09:47:28.279000 audit[2550]: NETFILTER_CFG table=nat:31 family=2 entries=1 op=nft_register_chain pid=2550 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:47:28.279000 audit[2550]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe644cff0 a2=0 a3=1 items=0 ppid=2527 pid=2550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:28.279000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 9 09:47:28.291000 audit[2553]: NETFILTER_CFG table=nat:32 family=2 entries=1 op=nft_register_rule pid=2553 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:47:28.291000 audit[2553]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=fffffb1e3a00 a2=0 a3=1 items=0 ppid=2527 pid=2553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:28.291000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 9 09:47:28.300000 audit[2556]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=2556 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:47:28.300000 audit[2556]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=664 a0=3 a1=ffffcbcf7f70 a2=0 a3=1 items=0 ppid=2527 pid=2556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:28.300000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 9 09:47:28.302000 audit[2557]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=2557 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:47:28.302000 audit[2557]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffee0b7800 a2=0 a3=1 items=0 ppid=2527 pid=2557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:28.302000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 9 09:47:28.304000 audit[2558]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_chain pid=2558 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:47:28.304000 audit[2558]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe19ff850 a2=0 a3=1 items=0 ppid=2527 pid=2558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:28.304000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 9 09:47:28.308000 audit[2560]: NETFILTER_CFG table=nat:36 family=2 entries=1 op=nft_register_rule pid=2560 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:47:28.308000 audit[2560]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffc65a3140 a2=0 a3=1 items=0 ppid=2527 pid=2560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:28.308000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 9 09:47:28.312000 audit[2562]: NETFILTER_CFG table=nat:37 family=2 entries=1 op=nft_register_rule pid=2562 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:47:28.312000 audit[2562]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffca6704a0 a2=0 a3=1 items=0 ppid=2527 pid=2562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:28.312000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 9 09:47:28.316000 audit[2564]: NETFILTER_CFG table=nat:38 family=2 entries=1 op=nft_register_rule pid=2564 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:47:28.316000 audit[2564]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=364 a0=3 a1=ffffcbe440b0 a2=0 a3=1 items=0 ppid=2527 pid=2564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:28.316000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 9 09:47:28.322018 kubelet[2527]: I0209 09:47:28.321967 2527 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-30-62" Feb 9 09:47:28.323112 kubelet[2527]: E0209 09:47:28.323086 2527 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.30.62:6443/api/v1/nodes\": dial tcp 172.31.30.62:6443: connect: connection refused" node="ip-172-31-30-62" Feb 9 09:47:28.325000 audit[2567]: NETFILTER_CFG table=nat:39 family=2 entries=1 op=nft_register_rule pid=2567 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:47:28.325000 audit[2567]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=220 a0=3 a1=ffffcd44fbd0 a2=0 a3=1 items=0 ppid=2527 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:28.325000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 9 09:47:28.330000 audit[2569]: NETFILTER_CFG table=nat:40 family=2 entries=1 op=nft_register_rule pid=2569 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:47:28.330000 audit[2569]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=540 a0=3 a1=fffffc8d3f20 a2=0 a3=1 items=0 ppid=2527 pid=2569 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:28.330000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 9 09:47:28.333051 kubelet[2527]: I0209 09:47:28.333010 2527 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 09:47:28.333000 audit[2570]: NETFILTER_CFG table=mangle:41 family=10 entries=2 op=nft_register_chain pid=2570 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:47:28.333000 audit[2570]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffeb385180 a2=0 a3=1 items=0 ppid=2527 pid=2570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:28.333000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 9 09:47:28.335000 audit[2571]: NETFILTER_CFG table=mangle:42 family=2 entries=1 op=nft_register_chain pid=2571 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:47:28.335000 audit[2571]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffc865200 a2=0 a3=1 items=0 ppid=2527 pid=2571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:28.335000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 9 09:47:28.336000 audit[2572]: NETFILTER_CFG 
table=nat:43 family=10 entries=2 op=nft_register_chain pid=2572 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:47:28.336000 audit[2572]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=fffffa135910 a2=0 a3=1 items=0 ppid=2527 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:28.336000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 9 09:47:28.342000 audit[2573]: NETFILTER_CFG table=nat:44 family=2 entries=1 op=nft_register_chain pid=2573 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:47:28.342000 audit[2573]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc9ebf040 a2=0 a3=1 items=0 ppid=2527 pid=2573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:28.342000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 9 09:47:28.345000 audit[2575]: NETFILTER_CFG table=nat:45 family=10 entries=1 op=nft_register_rule pid=2575 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:47:28.345000 audit[2575]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffc68c0400 a2=0 a3=1 items=0 ppid=2527 pid=2575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:28.345000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 9 
09:47:28.348000 audit[2577]: NETFILTER_CFG table=filter:46 family=10 entries=2 op=nft_register_chain pid=2577 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:47:28.348000 audit[2577]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=132 a0=3 a1=ffffe3189c10 a2=0 a3=1 items=0 ppid=2527 pid=2577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:28.348000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 9 09:47:28.350000 audit[2576]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2576 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:47:28.350000 audit[2576]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe379c0a0 a2=0 a3=1 items=0 ppid=2527 pid=2576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:28.350000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 9 09:47:28.358438 kubelet[2527]: I0209 09:47:28.358399 2527 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 09:47:28.358438 kubelet[2527]: I0209 09:47:28.358436 2527 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 09:47:28.358704 kubelet[2527]: I0209 09:47:28.358470 2527 state_mem.go:36] "Initialized new in-memory state store" Feb 9 09:47:28.356000 audit[2579]: NETFILTER_CFG table=filter:48 family=10 entries=1 op=nft_register_rule pid=2579 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:47:28.356000 audit[2579]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=664 a0=3 a1=fffffcb73980 a2=0 a3=1 items=0 ppid=2527 
pid=2579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:28.356000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 9 09:47:28.367879 kubelet[2527]: I0209 09:47:28.367842 2527 policy_none.go:49] "None policy: Start" Feb 9 09:47:28.366000 audit[2580]: NETFILTER_CFG table=nat:49 family=10 entries=1 op=nft_register_chain pid=2580 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:47:28.366000 audit[2580]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe2d06210 a2=0 a3=1 items=0 ppid=2527 pid=2580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:28.366000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 9 09:47:28.370087 kubelet[2527]: I0209 09:47:28.370053 2527 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 09:47:28.370286 kubelet[2527]: I0209 09:47:28.370263 2527 state_mem.go:35] "Initializing new in-memory state store" Feb 9 09:47:28.369000 audit[2581]: NETFILTER_CFG table=nat:50 family=10 entries=1 op=nft_register_chain pid=2581 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:47:28.369000 audit[2581]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffda495720 a2=0 a3=1 items=0 ppid=2527 pid=2581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:28.369000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 9 09:47:28.375000 audit[2583]: NETFILTER_CFG table=nat:51 family=10 entries=1 op=nft_register_rule pid=2583 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:47:28.375000 audit[2583]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffe62be180 a2=0 a3=1 items=0 ppid=2527 pid=2583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:28.375000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 9 09:47:28.379000 audit[2585]: NETFILTER_CFG table=nat:52 family=10 entries=2 op=nft_register_chain pid=2585 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:47:28.379000 audit[2585]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffe8330cb0 a2=0 a3=1 items=0 ppid=2527 pid=2585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:28.379000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 9 09:47:28.386000 audit[2587]: NETFILTER_CFG table=nat:53 family=10 entries=1 op=nft_register_rule pid=2587 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:47:28.386000 audit[2587]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=364 a0=3 a1=ffffd45077d0 a2=0 a3=1 items=0 
ppid=2527 pid=2587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:28.386000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 9 09:47:28.396256 kubelet[2527]: I0209 09:47:28.396222 2527 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 09:47:28.397000 audit[2527]: AVC avc: denied { mac_admin } for pid=2527 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:47:28.397000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 09:47:28.397000 audit[2527]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000b3c630 a1=400068c570 a2=4000b3c600 a3=25 items=0 ppid=1 pid=2527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:28.397000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 09:47:28.399490 kubelet[2527]: I0209 09:47:28.399205 2527 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Feb 9 09:47:28.399608 kubelet[2527]: I0209 09:47:28.399521 2527 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 09:47:28.401053 kubelet[2527]: E0209 09:47:28.401010 2527 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-30-62\" not found" Feb 9 09:47:28.402000 audit[2589]: NETFILTER_CFG table=nat:54 family=10 entries=1 op=nft_register_rule pid=2589 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:47:28.402000 audit[2589]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=220 a0=3 a1=ffffd794f350 a2=0 a3=1 items=0 ppid=2527 pid=2589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:28.402000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 9 09:47:28.410000 audit[2591]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_rule pid=2591 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:47:28.410000 audit[2591]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=556 a0=3 a1=ffffcd29e800 a2=0 a3=1 items=0 ppid=2527 pid=2591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:28.410000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 9 09:47:28.413093 kubelet[2527]: I0209 09:47:28.413054 2527 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 9 09:47:28.413296 kubelet[2527]: I0209 09:47:28.413275 2527 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 09:47:28.413756 kubelet[2527]: I0209 09:47:28.413730 2527 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 09:47:28.414493 kubelet[2527]: E0209 09:47:28.414463 2527 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 09:47:28.414899 kubelet[2527]: W0209 09:47:28.414815 2527 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.30.62:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.62:6443: connect: connection refused Feb 9 09:47:28.415099 kubelet[2527]: E0209 09:47:28.415067 2527 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.30.62:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.62:6443: connect: connection refused Feb 9 09:47:28.415000 audit[2592]: NETFILTER_CFG table=mangle:56 family=10 entries=1 op=nft_register_chain pid=2592 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:47:28.415000 audit[2592]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc8fccac0 a2=0 a3=1 items=0 ppid=2527 pid=2592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) 
Feb 9 09:47:28.415000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 9 09:47:28.417000 audit[2593]: NETFILTER_CFG table=nat:57 family=10 entries=1 op=nft_register_chain pid=2593 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:47:28.417000 audit[2593]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff4494d20 a2=0 a3=1 items=0 ppid=2527 pid=2593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:28.417000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 9 09:47:28.422763 kubelet[2527]: E0209 09:47:28.422718 2527 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://172.31.30.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-62?timeout=10s": dial tcp 172.31.30.62:6443: connect: connection refused Feb 9 09:47:28.423000 audit[2594]: NETFILTER_CFG table=filter:58 family=10 entries=1 op=nft_register_chain pid=2594 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:47:28.423000 audit[2594]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcc0c7c10 a2=0 a3=1 items=0 ppid=2527 pid=2594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:28.423000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 9 09:47:28.513871 kubelet[2527]: E0209 09:47:28.513744 2527 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, 
ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-30-62.17b228c800b1650e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-30-62", UID:"ip-172-31-30-62", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-30-62"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 47, 28, 197035278, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 47, 28, 197035278, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://172.31.30.62:6443/api/v1/namespaces/default/events": dial tcp 172.31.30.62:6443: connect: connection refused'(may retry after sleeping) Feb 9 09:47:28.515867 kubelet[2527]: I0209 09:47:28.515831 2527 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:47:28.517710 kubelet[2527]: I0209 09:47:28.517672 2527 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:47:28.523220 kubelet[2527]: I0209 09:47:28.523179 2527 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:47:28.524253 kubelet[2527]: I0209 09:47:28.524223 2527 status_manager.go:698] "Failed to get status for pod" podUID=76ea3c11550ffdc2a2de0a6cb4c4a353 pod="kube-system/kube-scheduler-ip-172-31-30-62" err="Get \"https://172.31.30.62:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ip-172-31-30-62\": dial tcp 172.31.30.62:6443: connect: connection refused" Feb 9 
09:47:28.526553 kubelet[2527]: I0209 09:47:28.526510 2527 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-30-62" Feb 9 09:47:28.527808 kubelet[2527]: I0209 09:47:28.527773 2527 status_manager.go:698] "Failed to get status for pod" podUID=d620f9b5166534473af9b27e0db1bfae pod="kube-system/kube-apiserver-ip-172-31-30-62" err="Get \"https://172.31.30.62:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ip-172-31-30-62\": dial tcp 172.31.30.62:6443: connect: connection refused" Feb 9 09:47:28.528076 kubelet[2527]: E0209 09:47:28.528054 2527 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.30.62:6443/api/v1/nodes\": dial tcp 172.31.30.62:6443: connect: connection refused" node="ip-172-31-30-62" Feb 9 09:47:28.532524 kubelet[2527]: I0209 09:47:28.532468 2527 status_manager.go:698] "Failed to get status for pod" podUID=23ca3dcdc4fdaa1a78559bb3d6daa8bd pod="kube-system/kube-controller-manager-ip-172-31-30-62" err="Get \"https://172.31.30.62:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ip-172-31-30-62\": dial tcp 172.31.30.62:6443: connect: connection refused" Feb 9 09:47:28.623630 kubelet[2527]: I0209 09:47:28.623592 2527 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d620f9b5166534473af9b27e0db1bfae-ca-certs\") pod \"kube-apiserver-ip-172-31-30-62\" (UID: \"d620f9b5166534473af9b27e0db1bfae\") " pod="kube-system/kube-apiserver-ip-172-31-30-62" Feb 9 09:47:28.623892 kubelet[2527]: I0209 09:47:28.623856 2527 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/23ca3dcdc4fdaa1a78559bb3d6daa8bd-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-62\" (UID: \"23ca3dcdc4fdaa1a78559bb3d6daa8bd\") " pod="kube-system/kube-controller-manager-ip-172-31-30-62" Feb 9 09:47:28.624087 
kubelet[2527]: I0209 09:47:28.624048 2527 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23ca3dcdc4fdaa1a78559bb3d6daa8bd-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-62\" (UID: \"23ca3dcdc4fdaa1a78559bb3d6daa8bd\") " pod="kube-system/kube-controller-manager-ip-172-31-30-62" Feb 9 09:47:28.624285 kubelet[2527]: I0209 09:47:28.624247 2527 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23ca3dcdc4fdaa1a78559bb3d6daa8bd-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-62\" (UID: \"23ca3dcdc4fdaa1a78559bb3d6daa8bd\") " pod="kube-system/kube-controller-manager-ip-172-31-30-62" Feb 9 09:47:28.624466 kubelet[2527]: I0209 09:47:28.624436 2527 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/23ca3dcdc4fdaa1a78559bb3d6daa8bd-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-62\" (UID: \"23ca3dcdc4fdaa1a78559bb3d6daa8bd\") " pod="kube-system/kube-controller-manager-ip-172-31-30-62" Feb 9 09:47:28.624702 kubelet[2527]: I0209 09:47:28.624664 2527 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23ca3dcdc4fdaa1a78559bb3d6daa8bd-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-62\" (UID: \"23ca3dcdc4fdaa1a78559bb3d6daa8bd\") " pod="kube-system/kube-controller-manager-ip-172-31-30-62" Feb 9 09:47:28.624886 kubelet[2527]: I0209 09:47:28.624856 2527 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/76ea3c11550ffdc2a2de0a6cb4c4a353-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-62\" (UID: \"76ea3c11550ffdc2a2de0a6cb4c4a353\") " 
pod="kube-system/kube-scheduler-ip-172-31-30-62" Feb 9 09:47:28.625053 kubelet[2527]: I0209 09:47:28.625023 2527 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d620f9b5166534473af9b27e0db1bfae-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-62\" (UID: \"d620f9b5166534473af9b27e0db1bfae\") " pod="kube-system/kube-apiserver-ip-172-31-30-62" Feb 9 09:47:28.625220 kubelet[2527]: I0209 09:47:28.625190 2527 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d620f9b5166534473af9b27e0db1bfae-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-62\" (UID: \"d620f9b5166534473af9b27e0db1bfae\") " pod="kube-system/kube-apiserver-ip-172-31-30-62" Feb 9 09:47:28.824271 kubelet[2527]: E0209 09:47:28.824220 2527 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://172.31.30.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-62?timeout=10s": dial tcp 172.31.30.62:6443: connect: connection refused Feb 9 09:47:28.832600 env[1801]: time="2024-02-09T09:47:28.832501758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-62,Uid:76ea3c11550ffdc2a2de0a6cb4c4a353,Namespace:kube-system,Attempt:0,}" Feb 9 09:47:28.839036 env[1801]: time="2024-02-09T09:47:28.838501115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-62,Uid:d620f9b5166534473af9b27e0db1bfae,Namespace:kube-system,Attempt:0,}" Feb 9 09:47:28.841883 env[1801]: time="2024-02-09T09:47:28.841257741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-62,Uid:23ca3dcdc4fdaa1a78559bb3d6daa8bd,Namespace:kube-system,Attempt:0,}" Feb 9 09:47:28.930984 kubelet[2527]: I0209 09:47:28.930944 2527 kubelet_node_status.go:70] "Attempting to 
register node" node="ip-172-31-30-62" Feb 9 09:47:28.931431 kubelet[2527]: E0209 09:47:28.931398 2527 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.30.62:6443/api/v1/nodes\": dial tcp 172.31.30.62:6443: connect: connection refused" node="ip-172-31-30-62" Feb 9 09:47:29.031741 kubelet[2527]: W0209 09:47:29.031657 2527 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.30.62:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.62:6443: connect: connection refused Feb 9 09:47:29.031741 kubelet[2527]: E0209 09:47:29.031747 2527 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.30.62:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.62:6443: connect: connection refused Feb 9 09:47:29.092162 kubelet[2527]: W0209 09:47:29.091321 2527 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.30.62:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.62:6443: connect: connection refused Feb 9 09:47:29.092162 kubelet[2527]: E0209 09:47:29.091431 2527 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.30.62:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.62:6443: connect: connection refused Feb 9 09:47:29.311360 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3782192263.mount: Deactivated successfully. 
Feb 9 09:47:29.322534 env[1801]: time="2024-02-09T09:47:29.322469531Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:29.324382 env[1801]: time="2024-02-09T09:47:29.324336700Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:29.329317 env[1801]: time="2024-02-09T09:47:29.329268515Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:29.331624 env[1801]: time="2024-02-09T09:47:29.331536380Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:29.334252 env[1801]: time="2024-02-09T09:47:29.334198133Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:29.336222 env[1801]: time="2024-02-09T09:47:29.336165632Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:29.339271 env[1801]: time="2024-02-09T09:47:29.339212184Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:29.343173 env[1801]: time="2024-02-09T09:47:29.342356594Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 
09:47:29.345662 env[1801]: time="2024-02-09T09:47:29.345610519Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:29.350307 env[1801]: time="2024-02-09T09:47:29.350237922Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:29.362038 env[1801]: time="2024-02-09T09:47:29.361986456Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:29.382422 env[1801]: time="2024-02-09T09:47:29.382312013Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:47:29.384042 env[1801]: time="2024-02-09T09:47:29.382392580Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:47:29.384560 env[1801]: time="2024-02-09T09:47:29.384008394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:47:29.385251 env[1801]: time="2024-02-09T09:47:29.385188451Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:29.394014 env[1801]: time="2024-02-09T09:47:29.385535044Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/140e509204bf0d902409090f3b0e506051677874b7fd09d1d463d59a2b65f86e pid=2603 runtime=io.containerd.runc.v2 Feb 9 09:47:29.454076 env[1801]: time="2024-02-09T09:47:29.453867367Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:47:29.454651 env[1801]: time="2024-02-09T09:47:29.454341858Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:47:29.454907 env[1801]: time="2024-02-09T09:47:29.454831383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:47:29.456448 env[1801]: time="2024-02-09T09:47:29.456344381Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/14ced5613f4aa1e2949dc40ca0438202c86aa599b2e4b98c936de4626233bdc3 pid=2638 runtime=io.containerd.runc.v2 Feb 9 09:47:29.458022 env[1801]: time="2024-02-09T09:47:29.457889102Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:47:29.458207 env[1801]: time="2024-02-09T09:47:29.457965706Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:47:29.458207 env[1801]: time="2024-02-09T09:47:29.458007179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:47:29.459354 env[1801]: time="2024-02-09T09:47:29.459254403Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/35773c57b99d68c5f9d8697583779ddda7c809aae91d98fba261db9f69f673aa pid=2633 runtime=io.containerd.runc.v2 Feb 9 09:47:29.596320 env[1801]: time="2024-02-09T09:47:29.595307077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-62,Uid:23ca3dcdc4fdaa1a78559bb3d6daa8bd,Namespace:kube-system,Attempt:0,} returns sandbox id \"140e509204bf0d902409090f3b0e506051677874b7fd09d1d463d59a2b65f86e\"" Feb 9 09:47:29.602431 env[1801]: time="2024-02-09T09:47:29.602357270Z" level=info msg="CreateContainer within sandbox \"140e509204bf0d902409090f3b0e506051677874b7fd09d1d463d59a2b65f86e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 9 09:47:29.625744 kubelet[2527]: E0209 09:47:29.625651 2527 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get "https://172.31.30.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-62?timeout=10s": dial tcp 172.31.30.62:6443: connect: connection refused Feb 9 09:47:29.626700 env[1801]: time="2024-02-09T09:47:29.626628931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-62,Uid:76ea3c11550ffdc2a2de0a6cb4c4a353,Namespace:kube-system,Attempt:0,} returns sandbox id \"35773c57b99d68c5f9d8697583779ddda7c809aae91d98fba261db9f69f673aa\"" Feb 9 09:47:29.628407 env[1801]: time="2024-02-09T09:47:29.628326693Z" level=info msg="CreateContainer within sandbox \"140e509204bf0d902409090f3b0e506051677874b7fd09d1d463d59a2b65f86e\" for 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c854234f199e2ed200562cccc52746d75160a4eee23f2fa96baf021294ae21bc\"" Feb 9 09:47:29.629556 env[1801]: time="2024-02-09T09:47:29.629167097Z" level=info msg="StartContainer for \"c854234f199e2ed200562cccc52746d75160a4eee23f2fa96baf021294ae21bc\"" Feb 9 09:47:29.631741 env[1801]: time="2024-02-09T09:47:29.631687480Z" level=info msg="CreateContainer within sandbox \"35773c57b99d68c5f9d8697583779ddda7c809aae91d98fba261db9f69f673aa\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 9 09:47:29.646555 env[1801]: time="2024-02-09T09:47:29.646495402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-62,Uid:d620f9b5166534473af9b27e0db1bfae,Namespace:kube-system,Attempt:0,} returns sandbox id \"14ced5613f4aa1e2949dc40ca0438202c86aa599b2e4b98c936de4626233bdc3\"" Feb 9 09:47:29.653835 env[1801]: time="2024-02-09T09:47:29.653762201Z" level=info msg="CreateContainer within sandbox \"14ced5613f4aa1e2949dc40ca0438202c86aa599b2e4b98c936de4626233bdc3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 9 09:47:29.667975 env[1801]: time="2024-02-09T09:47:29.667912020Z" level=info msg="CreateContainer within sandbox \"35773c57b99d68c5f9d8697583779ddda7c809aae91d98fba261db9f69f673aa\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"20e8b47edce09d299df129f41d4bc60e6e26619ecd67621feec7967186d4bf11\"" Feb 9 09:47:29.668996 env[1801]: time="2024-02-09T09:47:29.668915455Z" level=info msg="StartContainer for \"20e8b47edce09d299df129f41d4bc60e6e26619ecd67621feec7967186d4bf11\"" Feb 9 09:47:29.696904 env[1801]: time="2024-02-09T09:47:29.696840081Z" level=info msg="CreateContainer within sandbox \"14ced5613f4aa1e2949dc40ca0438202c86aa599b2e4b98c936de4626233bdc3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2c7e690f644d4ea592054ed0d594925dd1d36fa2ac0b472cfe1981ec76dca3d0\"" Feb 
9 09:47:29.702864 env[1801]: time="2024-02-09T09:47:29.702806321Z" level=info msg="StartContainer for \"2c7e690f644d4ea592054ed0d594925dd1d36fa2ac0b472cfe1981ec76dca3d0\"" Feb 9 09:47:29.720174 kubelet[2527]: W0209 09:47:29.720013 2527 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.30.62:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-62&limit=500&resourceVersion=0": dial tcp 172.31.30.62:6443: connect: connection refused Feb 9 09:47:29.720174 kubelet[2527]: E0209 09:47:29.720101 2527 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.30.62:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-62&limit=500&resourceVersion=0": dial tcp 172.31.30.62:6443: connect: connection refused Feb 9 09:47:29.735189 kubelet[2527]: I0209 09:47:29.734509 2527 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-30-62" Feb 9 09:47:29.735189 kubelet[2527]: E0209 09:47:29.735154 2527 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.30.62:6443/api/v1/nodes\": dial tcp 172.31.30.62:6443: connect: connection refused" node="ip-172-31-30-62" Feb 9 09:47:29.868940 env[1801]: time="2024-02-09T09:47:29.867861426Z" level=info msg="StartContainer for \"c854234f199e2ed200562cccc52746d75160a4eee23f2fa96baf021294ae21bc\" returns successfully" Feb 9 09:47:29.870298 kubelet[2527]: W0209 09:47:29.870159 2527 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.30.62:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.62:6443: connect: connection refused Feb 9 09:47:29.870298 kubelet[2527]: E0209 09:47:29.870225 2527 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
"https://172.31.30.62:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.62:6443: connect: connection refused Feb 9 09:47:29.882364 env[1801]: time="2024-02-09T09:47:29.882300109Z" level=info msg="StartContainer for \"2c7e690f644d4ea592054ed0d594925dd1d36fa2ac0b472cfe1981ec76dca3d0\" returns successfully" Feb 9 09:47:29.999644 env[1801]: time="2024-02-09T09:47:29.999552520Z" level=info msg="StartContainer for \"20e8b47edce09d299df129f41d4bc60e6e26619ecd67621feec7967186d4bf11\" returns successfully" Feb 9 09:47:31.337736 kubelet[2527]: I0209 09:47:31.337702 2527 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-30-62" Feb 9 09:47:34.474183 update_engine[1794]: I0209 09:47:34.473635 1794 update_attempter.cc:509] Updating boot flags... Feb 9 09:47:35.200629 kubelet[2527]: I0209 09:47:35.199893 2527 apiserver.go:52] "Watching apiserver" Feb 9 09:47:35.278465 kubelet[2527]: I0209 09:47:35.278194 2527 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-30-62" Feb 9 09:47:35.314259 kubelet[2527]: I0209 09:47:35.314161 2527 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 09:47:35.342053 kubelet[2527]: I0209 09:47:35.341725 2527 reconciler.go:41] "Reconciler: start to sync state" Feb 9 09:47:35.442094 kubelet[2527]: E0209 09:47:35.441950 2527 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: namespaces "kube-node-lease" not found Feb 9 09:47:37.716587 systemd[1]: Reloading. 
Feb 9 09:47:37.851118 /usr/lib/systemd/system-generators/torcx-generator[3030]: time="2024-02-09T09:47:37Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:47:37.851183 /usr/lib/systemd/system-generators/torcx-generator[3030]: time="2024-02-09T09:47:37Z" level=info msg="torcx already run" Feb 9 09:47:38.045002 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:47:38.045042 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:47:38.091608 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:47:38.149021 amazon-ssm-agent[1853]: 2024-02-09 09:47:38 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Feb 9 09:47:38.305094 kubelet[2527]: I0209 09:47:38.304510 2527 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 09:47:38.305171 systemd[1]: Stopping kubelet.service... Feb 9 09:47:38.326526 systemd[1]: kubelet.service: Deactivated successfully. Feb 9 09:47:38.327284 systemd[1]: Stopped kubelet.service. Feb 9 09:47:38.325000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:47:38.330697 kernel: kauditd_printk_skb: 104 callbacks suppressed Feb 9 09:47:38.330808 kernel: audit: type=1131 audit(1707472058.325:220): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:38.341362 systemd[1]: Started kubelet.service. Feb 9 09:47:38.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:38.363801 kernel: audit: type=1130 audit(1707472058.340:221): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:38.503906 kubelet[3093]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 09:47:38.504489 kubelet[3093]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 09:47:38.504776 kubelet[3093]: I0209 09:47:38.504719 3093 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 09:47:38.510878 kubelet[3093]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 09:47:38.511555 kubelet[3093]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 09:47:38.521169 kubelet[3093]: I0209 09:47:38.521125 3093 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 09:47:38.521373 kubelet[3093]: I0209 09:47:38.521350 3093 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 09:47:38.522337 kubelet[3093]: I0209 09:47:38.521854 3093 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 09:47:38.524991 kubelet[3093]: I0209 09:47:38.524953 3093 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 9 09:47:38.526536 kubelet[3093]: I0209 09:47:38.526489 3093 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 09:47:38.530667 kubelet[3093]: W0209 09:47:38.530633 3093 machine.go:65] Cannot read vendor id correctly, set empty. Feb 9 09:47:38.532335 kubelet[3093]: I0209 09:47:38.532290 3093 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 09:47:38.533499 kubelet[3093]: I0209 09:47:38.533469 3093 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 09:47:38.533812 kubelet[3093]: I0209 09:47:38.533786 3093 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 09:47:38.534049 kubelet[3093]: I0209 09:47:38.534025 3093 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 09:47:38.534201 kubelet[3093]: I0209 09:47:38.534179 3093 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 09:47:38.534355 kubelet[3093]: I0209 09:47:38.534334 3093 state_mem.go:36] "Initialized new 
in-memory state store" Feb 9 09:47:38.557070 kubelet[3093]: I0209 09:47:38.556923 3093 kubelet.go:398] "Attempting to sync node with API server" Feb 9 09:47:38.557287 kubelet[3093]: I0209 09:47:38.557262 3093 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 09:47:38.557435 kubelet[3093]: I0209 09:47:38.557413 3093 kubelet.go:297] "Adding apiserver pod source" Feb 9 09:47:38.557800 kubelet[3093]: I0209 09:47:38.557774 3093 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 09:47:38.564000 audit[3093]: AVC avc: denied { mac_admin } for pid=3093 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:47:38.577990 kubelet[3093]: I0209 09:47:38.560802 3093 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 09:47:38.577990 kubelet[3093]: I0209 09:47:38.561892 3093 server.go:1186] "Started kubelet" Feb 9 09:47:38.577990 kubelet[3093]: I0209 09:47:38.565882 3093 kubelet.go:1341] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Feb 9 09:47:38.577990 kubelet[3093]: I0209 09:47:38.565949 3093 kubelet.go:1345] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Feb 9 09:47:38.577990 kubelet[3093]: I0209 09:47:38.566027 3093 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 09:47:38.577990 kubelet[3093]: I0209 09:47:38.570729 3093 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 09:47:38.577990 kubelet[3093]: I0209 09:47:38.571721 3093 server.go:451] "Adding debug handlers to kubelet server" Feb 9 09:47:38.577990 
kubelet[3093]: I0209 09:47:38.577220 3093 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 09:47:38.579779 kubelet[3093]: I0209 09:47:38.579727 3093 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 09:47:38.586718 kernel: audit: type=1400 audit(1707472058.564:222): avc: denied { mac_admin } for pid=3093 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:47:38.586828 kernel: audit: type=1401 audit(1707472058.564:222): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 09:47:38.564000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 09:47:38.598548 kernel: audit: type=1300 audit(1707472058.564:222): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000aa55f0 a1=40007e11a0 a2=4000aa55c0 a3=25 items=0 ppid=1 pid=3093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:38.564000 audit[3093]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000aa55f0 a1=40007e11a0 a2=4000aa55c0 a3=25 items=0 ppid=1 pid=3093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:38.564000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 09:47:38.611027 kernel: audit: type=1327 audit(1707472058.564:222): 
proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 09:47:38.564000 audit[3093]: AVC avc: denied { mac_admin } for pid=3093 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:47:38.564000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 09:47:38.635316 kernel: audit: type=1400 audit(1707472058.564:223): avc: denied { mac_admin } for pid=3093 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:47:38.635456 kernel: audit: type=1401 audit(1707472058.564:223): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 09:47:38.635508 kubelet[3093]: E0209 09:47:38.633593 3093 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 09:47:38.635508 kubelet[3093]: E0209 09:47:38.633645 3093 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 09:47:38.564000 audit[3093]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000ac0c80 a1=40007e11b8 a2=4000aa5680 a3=25 items=0 ppid=1 pid=3093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:38.654122 kernel: audit: type=1300 audit(1707472058.564:223): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000ac0c80 a1=40007e11b8 a2=4000aa5680 a3=25 items=0 ppid=1 pid=3093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:38.564000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 09:47:38.688347 kernel: audit: type=1327 audit(1707472058.564:223): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 09:47:38.688457 kubelet[3093]: I0209 09:47:38.686349 3093 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Feb 9 09:47:38.729482 kubelet[3093]: I0209 09:47:38.729425 3093 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-30-62" Feb 9 09:47:38.812754 kubelet[3093]: I0209 09:47:38.812609 3093 kubelet_node_status.go:108] "Node was previously registered" node="ip-172-31-30-62" Feb 9 09:47:38.816021 kubelet[3093]: I0209 09:47:38.812733 3093 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-30-62" Feb 9 09:47:38.945906 kubelet[3093]: I0209 09:47:38.945871 3093 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 9 09:47:38.946199 kubelet[3093]: I0209 09:47:38.946164 3093 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 09:47:38.946522 kubelet[3093]: I0209 09:47:38.946500 3093 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 09:47:38.946786 kubelet[3093]: E0209 09:47:38.946766 3093 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 09:47:39.047608 kubelet[3093]: E0209 09:47:39.047539 3093 kubelet.go:2137] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 9 09:47:39.125960 kubelet[3093]: I0209 09:47:39.125818 3093 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 09:47:39.126140 kubelet[3093]: I0209 09:47:39.126116 3093 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 09:47:39.126277 kubelet[3093]: I0209 09:47:39.126257 3093 state_mem.go:36] "Initialized new in-memory state store" Feb 9 09:47:39.126706 kubelet[3093]: I0209 09:47:39.126683 3093 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 9 09:47:39.126843 kubelet[3093]: I0209 09:47:39.126823 3093 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 9 09:47:39.126950 kubelet[3093]: I0209 09:47:39.126930 3093 policy_none.go:49] "None policy: Start" Feb 9 09:47:39.128490 kubelet[3093]: I0209 
09:47:39.128457 3093 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 09:47:39.128754 kubelet[3093]: I0209 09:47:39.128730 3093 state_mem.go:35] "Initializing new in-memory state store" Feb 9 09:47:39.129154 kubelet[3093]: I0209 09:47:39.129128 3093 state_mem.go:75] "Updated machine memory state" Feb 9 09:47:39.137299 kubelet[3093]: I0209 09:47:39.137266 3093 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 09:47:39.135000 audit[3093]: AVC avc: denied { mac_admin } for pid=3093 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:47:39.135000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 09:47:39.135000 audit[3093]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4001150ba0 a1=400115a2e8 a2=4001150b70 a3=25 items=0 ppid=1 pid=3093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:39.135000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 09:47:39.138165 kubelet[3093]: I0209 09:47:39.138131 3093 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Feb 9 09:47:39.139021 kubelet[3093]: I0209 09:47:39.138991 3093 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 09:47:39.249024 kubelet[3093]: I0209 09:47:39.248964 3093 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:47:39.249194 kubelet[3093]: I0209 09:47:39.249097 3093 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:47:39.249194 kubelet[3093]: I0209 09:47:39.249163 3093 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:47:39.295549 kubelet[3093]: E0209 09:47:39.295496 3093 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-30-62\" already exists" pod="kube-system/kube-apiserver-ip-172-31-30-62" Feb 9 09:47:39.297056 kubelet[3093]: E0209 09:47:39.297019 3093 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-30-62\" already exists" pod="kube-system/kube-scheduler-ip-172-31-30-62" Feb 9 09:47:39.297306 kubelet[3093]: I0209 09:47:39.297079 3093 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/76ea3c11550ffdc2a2de0a6cb4c4a353-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-62\" (UID: \"76ea3c11550ffdc2a2de0a6cb4c4a353\") " pod="kube-system/kube-scheduler-ip-172-31-30-62" Feb 9 09:47:39.297472 kubelet[3093]: I0209 09:47:39.297450 3093 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d620f9b5166534473af9b27e0db1bfae-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-62\" (UID: \"d620f9b5166534473af9b27e0db1bfae\") " pod="kube-system/kube-apiserver-ip-172-31-30-62" Feb 9 09:47:39.297647 kubelet[3093]: I0209 09:47:39.297625 3093 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/23ca3dcdc4fdaa1a78559bb3d6daa8bd-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-62\" (UID: \"23ca3dcdc4fdaa1a78559bb3d6daa8bd\") " pod="kube-system/kube-controller-manager-ip-172-31-30-62" Feb 9 09:47:39.297833 kubelet[3093]: I0209 09:47:39.297812 3093 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d620f9b5166534473af9b27e0db1bfae-ca-certs\") pod \"kube-apiserver-ip-172-31-30-62\" (UID: \"d620f9b5166534473af9b27e0db1bfae\") " pod="kube-system/kube-apiserver-ip-172-31-30-62" Feb 9 09:47:39.297987 kubelet[3093]: I0209 09:47:39.297966 3093 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d620f9b5166534473af9b27e0db1bfae-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-62\" (UID: \"d620f9b5166534473af9b27e0db1bfae\") " pod="kube-system/kube-apiserver-ip-172-31-30-62" Feb 9 09:47:39.298140 kubelet[3093]: I0209 09:47:39.298120 3093 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23ca3dcdc4fdaa1a78559bb3d6daa8bd-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-62\" (UID: \"23ca3dcdc4fdaa1a78559bb3d6daa8bd\") " pod="kube-system/kube-controller-manager-ip-172-31-30-62" Feb 9 09:47:39.298307 kubelet[3093]: I0209 09:47:39.298285 3093 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/23ca3dcdc4fdaa1a78559bb3d6daa8bd-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-62\" (UID: \"23ca3dcdc4fdaa1a78559bb3d6daa8bd\") " pod="kube-system/kube-controller-manager-ip-172-31-30-62" Feb 9 09:47:39.298460 kubelet[3093]: I0209 09:47:39.298439 3093 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23ca3dcdc4fdaa1a78559bb3d6daa8bd-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-62\" (UID: \"23ca3dcdc4fdaa1a78559bb3d6daa8bd\") " pod="kube-system/kube-controller-manager-ip-172-31-30-62" Feb 9 09:47:39.298649 kubelet[3093]: I0209 09:47:39.298620 3093 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23ca3dcdc4fdaa1a78559bb3d6daa8bd-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-62\" (UID: \"23ca3dcdc4fdaa1a78559bb3d6daa8bd\") " pod="kube-system/kube-controller-manager-ip-172-31-30-62" Feb 9 09:47:39.299060 kubelet[3093]: E0209 09:47:39.299031 3093 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-30-62\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-30-62" Feb 9 09:47:39.623083 kubelet[3093]: I0209 09:47:39.623033 3093 apiserver.go:52] "Watching apiserver" Feb 9 09:47:39.679927 kubelet[3093]: I0209 09:47:39.679885 3093 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 09:47:39.702497 kubelet[3093]: I0209 09:47:39.702452 3093 reconciler.go:41] "Reconciler: start to sync state" Feb 9 09:47:40.017562 kubelet[3093]: E0209 09:47:40.017488 3093 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-30-62\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-30-62" Feb 9 09:47:40.020760 kubelet[3093]: E0209 09:47:40.020726 3093 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-30-62\" already exists" pod="kube-system/kube-apiserver-ip-172-31-30-62" Feb 9 09:47:40.165782 kubelet[3093]: E0209 09:47:40.165743 3093 kubelet.go:1802] "Failed creating a mirror pod for" err="pods 
\"kube-scheduler-ip-172-31-30-62\" already exists" pod="kube-system/kube-scheduler-ip-172-31-30-62" Feb 9 09:47:41.072273 kubelet[3093]: I0209 09:47:41.072193 3093 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-30-62" podStartSLOduration=3.07209691 pod.CreationTimestamp="2024-02-09 09:47:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:47:40.617106202 +0000 UTC m=+2.266242139" watchObservedRunningTime="2024-02-09 09:47:41.07209691 +0000 UTC m=+2.721232871" Feb 9 09:47:41.365612 kubelet[3093]: I0209 09:47:41.365430 3093 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-30-62" podStartSLOduration=4.365343512 pod.CreationTimestamp="2024-02-09 09:47:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:47:41.081994464 +0000 UTC m=+2.731130437" watchObservedRunningTime="2024-02-09 09:47:41.365343512 +0000 UTC m=+3.014479449" Feb 9 09:47:45.949522 sudo[2104]: pam_unix(sudo:session): session closed for user root Feb 9 09:47:45.954611 kernel: kauditd_printk_skb: 4 callbacks suppressed Feb 9 09:47:45.954707 kernel: audit: type=1106 audit(1707472065.948:225): pid=2104 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 09:47:45.948000 audit[2104]: USER_END pid=2104 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Feb 9 09:47:45.951000 audit[2104]: CRED_DISP pid=2104 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 09:47:45.970174 kernel: audit: type=1104 audit(1707472065.951:226): pid=2104 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 09:47:45.983744 sshd[2100]: pam_unix(sshd:session): session closed for user core Feb 9 09:47:45.984000 audit[2100]: USER_END pid=2100 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:47:45.989386 systemd[1]: sshd@6-172.31.30.62:22-139.178.89.65:47860.service: Deactivated successfully. Feb 9 09:47:45.990830 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 09:47:46.000221 systemd-logind[1793]: Session 7 logged out. Waiting for processes to exit. 
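Editor's note: the `audit(1707472065.948:225)`-style fields in the records above are Unix epoch seconds (with milliseconds) followed by a record serial number, which is why they don't visually match the human-readable `Feb 9 09:47:45` prefixes at a glance. A minimal illustrative helper (not part of any tool in this log) to convert one:

```python
from datetime import datetime, timezone

def audit_stamp(field: str) -> tuple[datetime, int]:
    # field looks like "audit(1707472065.948:225)":
    # epoch seconds with millisecond precision, then the record serial
    inner = field[len("audit("):-1]      # "1707472065.948:225"
    ts, serial = inner.rsplit(":", 1)
    return datetime.fromtimestamp(float(ts), tz=timezone.utc), int(serial)

when, serial = audit_stamp("audit(1707472065.948:225)")
print(when.isoformat(), serial)
# 1707472065.948 UTC corresponds to Feb 9 09:47:45.948, matching the
# pam_unix session-close line this audit record accompanies.
```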
Feb 9 09:47:45.984000 audit[2100]: CRED_DISP pid=2100 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:47:46.010518 kernel: audit: type=1106 audit(1707472065.984:227): pid=2100 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:47:46.010660 kernel: audit: type=1104 audit(1707472065.984:228): pid=2100 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:47:45.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.30.62:22-139.178.89.65:47860 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:46.019733 kernel: audit: type=1131 audit(1707472065.988:229): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.30.62:22-139.178.89.65:47860 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:46.019913 systemd-logind[1793]: Removed session 7. 
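Editor's note: the long `proctitle=2F6F7074...` payloads in the audit records above are the process's argv, hex-encoded with NUL bytes between arguments; the kernel truncates the field at 128 bytes, which is why the final flag below comes out cut off as `--confi`. A sketch of a decoder (the function is illustrative; the hex string is copied from the records in this log):

```python
def decode_proctitle(hex_str: str) -> str:
    """Decode an audit PROCTITLE hex payload back into a command line."""
    raw = bytes.fromhex(hex_str)
    # argv elements are NUL-separated; join with spaces for display
    return " ".join(part.decode("utf-8", errors="replace")
                    for part in raw.split(b"\x00") if part)

# PROCTITLE payload from the kubelet audit records above (128 bytes,
# i.e. truncated by the kernel mid-argument)
cmdline = decode_proctitle(
    "2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B7562"
    "65636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261"
    "702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F657463"
    "2F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669"
)
print(cmdline)
```

The same decoding applies to the later `iptables`/`ip6tables` PROCTITLE records, which recover the `KUBE-PROXY-CANARY` chain-creation commands.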
Feb 9 09:47:46.900702 kubelet[3093]: I0209 09:47:46.900627 3093 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-30-62" podStartSLOduration=10.900531264 pod.CreationTimestamp="2024-02-09 09:47:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:47:41.366959084 +0000 UTC m=+3.016095057" watchObservedRunningTime="2024-02-09 09:47:46.900531264 +0000 UTC m=+8.549667225" Feb 9 09:47:51.883964 kubelet[3093]: I0209 09:47:51.883797 3093 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 9 09:47:51.884803 env[1801]: time="2024-02-09T09:47:51.884483554Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 9 09:47:51.885555 kubelet[3093]: I0209 09:47:51.885499 3093 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 9 09:47:52.536953 kubelet[3093]: I0209 09:47:52.536909 3093 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:47:52.576890 kubelet[3093]: I0209 09:47:52.576834 3093 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f4a722f9-d8f3-4ade-88e2-5a1ebc00f339-kube-proxy\") pod \"kube-proxy-tjdzh\" (UID: \"f4a722f9-d8f3-4ade-88e2-5a1ebc00f339\") " pod="kube-system/kube-proxy-tjdzh" Feb 9 09:47:52.577193 kubelet[3093]: I0209 09:47:52.577167 3093 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f4a722f9-d8f3-4ade-88e2-5a1ebc00f339-xtables-lock\") pod \"kube-proxy-tjdzh\" (UID: \"f4a722f9-d8f3-4ade-88e2-5a1ebc00f339\") " pod="kube-system/kube-proxy-tjdzh" Feb 9 09:47:52.577385 kubelet[3093]: I0209 09:47:52.577361 3093 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mslm7\" (UniqueName: \"kubernetes.io/projected/f4a722f9-d8f3-4ade-88e2-5a1ebc00f339-kube-api-access-mslm7\") pod \"kube-proxy-tjdzh\" (UID: \"f4a722f9-d8f3-4ade-88e2-5a1ebc00f339\") " pod="kube-system/kube-proxy-tjdzh" Feb 9 09:47:52.577560 kubelet[3093]: I0209 09:47:52.577539 3093 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4a722f9-d8f3-4ade-88e2-5a1ebc00f339-lib-modules\") pod \"kube-proxy-tjdzh\" (UID: \"f4a722f9-d8f3-4ade-88e2-5a1ebc00f339\") " pod="kube-system/kube-proxy-tjdzh" Feb 9 09:47:52.822455 kubelet[3093]: I0209 09:47:52.822323 3093 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:47:52.845925 env[1801]: time="2024-02-09T09:47:52.845378278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tjdzh,Uid:f4a722f9-d8f3-4ade-88e2-5a1ebc00f339,Namespace:kube-system,Attempt:0,}" Feb 9 09:47:52.880460 kubelet[3093]: I0209 09:47:52.880302 3093 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b48dcf4a-72c9-49a1-a680-584f4dd1c72e-var-lib-calico\") pod \"tigera-operator-cfc98749c-6sc77\" (UID: \"b48dcf4a-72c9-49a1-a680-584f4dd1c72e\") " pod="tigera-operator/tigera-operator-cfc98749c-6sc77" Feb 9 09:47:52.880460 kubelet[3093]: I0209 09:47:52.880376 3093 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh5xc\" (UniqueName: \"kubernetes.io/projected/b48dcf4a-72c9-49a1-a680-584f4dd1c72e-kube-api-access-rh5xc\") pod \"tigera-operator-cfc98749c-6sc77\" (UID: \"b48dcf4a-72c9-49a1-a680-584f4dd1c72e\") " pod="tigera-operator/tigera-operator-cfc98749c-6sc77" Feb 9 09:47:52.882821 env[1801]: time="2024-02-09T09:47:52.882701011Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:47:52.883088 env[1801]: time="2024-02-09T09:47:52.883031271Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:47:52.883260 env[1801]: time="2024-02-09T09:47:52.883205763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:47:52.883784 env[1801]: time="2024-02-09T09:47:52.883713587Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5c1a7724417cbcd82fd03944ea6eff41ee33528601d48098b670facc34c1bcd7 pid=3198 runtime=io.containerd.runc.v2 Feb 9 09:47:52.975459 env[1801]: time="2024-02-09T09:47:52.975389855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tjdzh,Uid:f4a722f9-d8f3-4ade-88e2-5a1ebc00f339,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c1a7724417cbcd82fd03944ea6eff41ee33528601d48098b670facc34c1bcd7\"" Feb 9 09:47:52.998810 env[1801]: time="2024-02-09T09:47:52.998738334Z" level=info msg="CreateContainer within sandbox \"5c1a7724417cbcd82fd03944ea6eff41ee33528601d48098b670facc34c1bcd7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 09:47:53.040223 env[1801]: time="2024-02-09T09:47:53.040140470Z" level=info msg="CreateContainer within sandbox \"5c1a7724417cbcd82fd03944ea6eff41ee33528601d48098b670facc34c1bcd7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a5f6596b4721d5fa7bca7627b0227a23039eced8147bfb55d36e0af510154d7a\"" Feb 9 09:47:53.043619 env[1801]: time="2024-02-09T09:47:53.043491991Z" level=info msg="StartContainer for \"a5f6596b4721d5fa7bca7627b0227a23039eced8147bfb55d36e0af510154d7a\"" Feb 9 09:47:53.132358 env[1801]: time="2024-02-09T09:47:53.132217431Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:tigera-operator-cfc98749c-6sc77,Uid:b48dcf4a-72c9-49a1-a680-584f4dd1c72e,Namespace:tigera-operator,Attempt:0,}" Feb 9 09:47:53.197069 env[1801]: time="2024-02-09T09:47:53.197003831Z" level=info msg="StartContainer for \"a5f6596b4721d5fa7bca7627b0227a23039eced8147bfb55d36e0af510154d7a\" returns successfully" Feb 9 09:47:53.276000 audit[3289]: NETFILTER_CFG table=mangle:59 family=2 entries=1 op=nft_register_chain pid=3289 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:47:53.276000 audit[3289]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe60b5160 a2=0 a3=ffffb75786c0 items=0 ppid=3252 pid=3289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:53.297239 kernel: audit: type=1325 audit(1707472073.276:230): table=mangle:59 family=2 entries=1 op=nft_register_chain pid=3289 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:47:53.297479 kernel: audit: type=1300 audit(1707472073.276:230): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe60b5160 a2=0 a3=ffffb75786c0 items=0 ppid=3252 pid=3289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:53.276000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 09:47:53.304184 kernel: audit: type=1327 audit(1707472073.276:230): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 09:47:53.277000 audit[3291]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_chain pid=3291 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:47:53.316330 kernel: audit: type=1325 
audit(1707472073.277:231): table=nat:60 family=2 entries=1 op=nft_register_chain pid=3291 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:47:53.277000 audit[3291]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff4b6c640 a2=0 a3=ffff8da266c0 items=0 ppid=3252 pid=3291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:53.333405 kernel: audit: type=1300 audit(1707472073.277:231): arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff4b6c640 a2=0 a3=ffff8da266c0 items=0 ppid=3252 pid=3291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:53.335289 env[1801]: time="2024-02-09T09:47:53.335173723Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:47:53.335700 env[1801]: time="2024-02-09T09:47:53.335642764Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:47:53.336001 env[1801]: time="2024-02-09T09:47:53.335944672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:47:53.336442 env[1801]: time="2024-02-09T09:47:53.336384355Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/019bedce69f50ba86d451ffc6eb829ffe4cfeb6c7ea73072dc90b6bed1954e29 pid=3301 runtime=io.containerd.runc.v2 Feb 9 09:47:53.277000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 9 09:47:53.342603 kernel: audit: type=1327 audit(1707472073.277:231): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 9 09:47:53.284000 audit[3292]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_chain pid=3292 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:47:53.348896 kernel: audit: type=1325 audit(1707472073.284:232): table=filter:61 family=2 entries=1 op=nft_register_chain pid=3292 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:47:53.284000 audit[3292]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff51df350 a2=0 a3=ffff800aa6c0 items=0 ppid=3252 pid=3292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:53.361529 kernel: audit: type=1300 audit(1707472073.284:232): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff51df350 a2=0 a3=ffff800aa6c0 items=0 ppid=3252 pid=3292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:53.284000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 9 09:47:53.373700 kernel: audit: type=1327 
audit(1707472073.284:232): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 9 09:47:53.389775 kernel: audit: type=1325 audit(1707472073.304:233): table=mangle:62 family=10 entries=1 op=nft_register_chain pid=3290 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:47:53.304000 audit[3290]: NETFILTER_CFG table=mangle:62 family=10 entries=1 op=nft_register_chain pid=3290 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:47:53.304000 audit[3290]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd258ac40 a2=0 a3=ffffabfa36c0 items=0 ppid=3252 pid=3290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:53.304000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 09:47:53.317000 audit[3295]: NETFILTER_CFG table=nat:63 family=10 entries=1 op=nft_register_chain pid=3295 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:47:53.317000 audit[3295]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe9e82310 a2=0 a3=ffffb9a686c0 items=0 ppid=3252 pid=3295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:53.317000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 9 09:47:53.334000 audit[3302]: NETFILTER_CFG table=filter:64 family=10 entries=1 op=nft_register_chain pid=3302 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:47:53.334000 audit[3302]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd1de9070 a2=0 a3=ffffba01e6c0 
items=0 ppid=3252 pid=3302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:53.334000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 9 09:47:53.390000 audit[3324]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3324 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:47:53.390000 audit[3324]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffd7b1c410 a2=0 a3=ffff936d06c0 items=0 ppid=3252 pid=3324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:53.390000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 9 09:47:53.404000 audit[3331]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3331 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:47:53.404000 audit[3331]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffe2bc5220 a2=0 a3=ffffbe8b36c0 items=0 ppid=3252 pid=3331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:53.404000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Feb 9 09:47:53.418000 audit[3334]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule 
pid=3334 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:47:53.418000 audit[3334]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffffcde21d0 a2=0 a3=ffff84d9f6c0 items=0 ppid=3252 pid=3334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:53.418000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Feb 9 09:47:53.423000 audit[3335]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3335 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:47:53.423000 audit[3335]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe2ed67e0 a2=0 a3=ffffbd5eb6c0 items=0 ppid=3252 pid=3335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:53.423000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 9 09:47:53.430000 audit[3337]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3337 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:47:53.430000 audit[3337]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffcab37d40 a2=0 a3=ffff992e56c0 items=0 ppid=3252 pid=3337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:53.430000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 9 09:47:53.433000 audit[3338]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3338 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:47:53.433000 audit[3338]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd2652e90 a2=0 a3=ffffa09f76c0 items=0 ppid=3252 pid=3338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:53.433000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 9 09:47:53.442000 audit[3340]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3340 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:47:53.442000 audit[3340]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffeea3a970 a2=0 a3=ffffa0b5c6c0 items=0 ppid=3252 pid=3340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:53.442000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 9 09:47:53.455000 audit[3348]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3348 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:47:53.455000 audit[3348]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 
a0=3 a1=ffffc168f070 a2=0 a3=ffff8d7e06c0 items=0 ppid=3252 pid=3348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:53.455000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Feb 9 09:47:53.461000 audit[3349]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_chain pid=3349 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:47:53.461000 audit[3349]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe1cd4180 a2=0 a3=ffffb67bc6c0 items=0 ppid=3252 pid=3349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:53.461000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 9 09:47:53.468709 env[1801]: time="2024-02-09T09:47:53.468653995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-cfc98749c-6sc77,Uid:b48dcf4a-72c9-49a1-a680-584f4dd1c72e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"019bedce69f50ba86d451ffc6eb829ffe4cfeb6c7ea73072dc90b6bed1954e29\"" Feb 9 09:47:53.469000 audit[3351]: NETFILTER_CFG table=filter:74 family=2 entries=1 op=nft_register_rule pid=3351 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:47:53.469000 audit[3351]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe214d2b0 a2=0 a3=ffff906bf6c0 items=0 ppid=3252 pid=3351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:53.469000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 9 09:47:53.472000 audit[3352]: NETFILTER_CFG table=filter:75 family=2 entries=1 op=nft_register_chain pid=3352 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:47:53.472000 audit[3352]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffeae739d0 a2=0 a3=ffff8b25c6c0 items=0 ppid=3252 pid=3352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:53.472000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 9 09:47:53.476232 env[1801]: time="2024-02-09T09:47:53.476182777Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.32.3\"" Feb 9 09:47:53.483000 audit[3354]: NETFILTER_CFG table=filter:76 family=2 entries=1 op=nft_register_rule pid=3354 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:47:53.483000 audit[3354]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc3608650 a2=0 a3=ffffbad9b6c0 items=0 ppid=3252 pid=3354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:53.483000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 9 09:47:53.491000 audit[3357]: NETFILTER_CFG table=filter:77 family=2 entries=1 op=nft_register_rule pid=3357 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:47:53.491000 audit[3357]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffdb5829a0 a2=0 a3=ffff9377c6c0 items=0 ppid=3252 pid=3357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:53.491000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 9 09:47:53.498000 audit[3360]: NETFILTER_CFG table=filter:78 family=2 entries=1 op=nft_register_rule pid=3360 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:47:53.498000 audit[3360]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffeaa210f0 a2=0 a3=ffffbe1fe6c0 items=0 ppid=3252 pid=3360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:53.498000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 9 09:47:53.500000 audit[3361]: NETFILTER_CFG table=nat:79 family=2 entries=1 
op=nft_register_chain pid=3361 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:47:53.500000 audit[3361]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffcfa6da30 a2=0 a3=ffffae26a6c0 items=0 ppid=3252 pid=3361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:53.500000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 9 09:47:53.506000 audit[3363]: NETFILTER_CFG table=nat:80 family=2 entries=1 op=nft_register_rule pid=3363 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:47:53.506000 audit[3363]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=fffff5b2d2d0 a2=0 a3=ffffb799b6c0 items=0 ppid=3252 pid=3363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:53.506000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 09:47:53.513000 audit[3366]: NETFILTER_CFG table=nat:81 family=2 entries=1 op=nft_register_rule pid=3366 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:47:53.513000 audit[3366]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff1382ac0 a2=0 a3=ffff8af106c0 items=0 ppid=3252 pid=3366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:53.513000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 09:47:53.536000 audit[3370]: NETFILTER_CFG table=filter:82 family=2 entries=6 op=nft_register_rule pid=3370 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:47:53.536000 audit[3370]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4028 a0=3 a1=fffff24f0d70 a2=0 a3=ffffa92916c0 items=0 ppid=3252 pid=3370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:53.536000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:47:53.545000 audit[3370]: NETFILTER_CFG table=nat:83 family=2 entries=17 op=nft_register_chain pid=3370 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:47:53.545000 audit[3370]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=fffff24f0d70 a2=0 a3=ffffa92916c0 items=0 ppid=3252 pid=3370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:53.545000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:47:53.553000 audit[3375]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3375 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:47:53.553000 audit[3375]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffe49827f0 a2=0 a3=ffff980226c0 items=0 ppid=3252 pid=3375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:53.553000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 9 09:47:53.558000 audit[3377]: NETFILTER_CFG table=filter:85 family=10 entries=2 op=nft_register_chain pid=3377 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:47:53.558000 audit[3377]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffc511c580 a2=0 a3=ffffb5fcf6c0 items=0 ppid=3252 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:53.558000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Feb 9 09:47:53.565000 audit[3380]: NETFILTER_CFG table=filter:86 family=10 entries=2 op=nft_register_chain pid=3380 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:47:53.565000 audit[3380]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffd7387760 a2=0 a3=ffff812576c0 items=0 ppid=3252 pid=3380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:53.565000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Feb 9 09:47:53.570000 audit[3381]: 
NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_chain pid=3381 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:47:53.570000 audit[3381]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdd5ba9b0 a2=0 a3=ffff808186c0 items=0 ppid=3252 pid=3381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:53.570000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 9 09:47:53.575000 audit[3383]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_rule pid=3383 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:47:53.575000 audit[3383]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe8ea9ec0 a2=0 a3=ffffb97066c0 items=0 ppid=3252 pid=3383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:53.575000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 9 09:47:53.577000 audit[3384]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3384 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:47:53.577000 audit[3384]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc605e130 a2=0 a3=ffffa9b916c0 items=0 ppid=3252 pid=3384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:53.577000 
audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 9 09:47:53.583000 audit[3386]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3386 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:47:53.583000 audit[3386]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd6d22c80 a2=0 a3=ffffa81106c0 items=0 ppid=3252 pid=3386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:53.583000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Feb 9 09:47:53.591000 audit[3389]: NETFILTER_CFG table=filter:91 family=10 entries=2 op=nft_register_chain pid=3389 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:47:53.591000 audit[3389]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffdf3806d0 a2=0 a3=ffff8288c6c0 items=0 ppid=3252 pid=3389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:53.591000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 9 09:47:53.593000 audit[3390]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_chain pid=3390 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:47:53.593000 audit[3390]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe1413470 a2=0 a3=ffffa71cd6c0 items=0 ppid=3252 pid=3390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:53.593000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 9 09:47:53.610000 audit[3392]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3392 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:47:53.610000 audit[3392]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffffe5b7790 a2=0 a3=ffff927ee6c0 items=0 ppid=3252 pid=3392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:53.610000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 9 09:47:53.613000 audit[3393]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_chain pid=3393 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:47:53.613000 audit[3393]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffe9f4740 a2=0 a3=ffff9ed8b6c0 items=0 ppid=3252 pid=3393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:53.613000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 9 09:47:53.618000 audit[3395]: NETFILTER_CFG 
table=filter:95 family=10 entries=1 op=nft_register_rule pid=3395 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:47:53.618000 audit[3395]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe2f544a0 a2=0 a3=ffffbefc56c0 items=0 ppid=3252 pid=3395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:53.618000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 9 09:47:53.627000 audit[3398]: NETFILTER_CFG table=filter:96 family=10 entries=1 op=nft_register_rule pid=3398 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:47:53.627000 audit[3398]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff381d810 a2=0 a3=ffff90c456c0 items=0 ppid=3252 pid=3398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:53.627000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 9 09:47:53.634000 audit[3401]: NETFILTER_CFG table=filter:97 family=10 entries=1 op=nft_register_rule pid=3401 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:47:53.634000 audit[3401]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd471e930 a2=0 a3=ffff84ec76c0 items=0 ppid=3252 pid=3401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:53.634000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Feb 9 09:47:53.636000 audit[3402]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3402 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:47:53.636000 audit[3402]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffffa02d780 a2=0 a3=ffffad82d6c0 items=0 ppid=3252 pid=3402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:53.636000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 9 09:47:53.641000 audit[3404]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3404 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:47:53.641000 audit[3404]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffe79e4f90 a2=0 a3=ffff843b16c0 items=0 ppid=3252 pid=3404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:53.641000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 09:47:53.650000 audit[3407]: NETFILTER_CFG table=nat:100 family=10 entries=2 op=nft_register_chain pid=3407 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:47:53.650000 audit[3407]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=fffffeeeae30 a2=0 a3=ffffa07656c0 items=0 ppid=3252 pid=3407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:53.650000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 09:47:53.661000 audit[3411]: NETFILTER_CFG table=filter:101 family=10 entries=3 op=nft_register_rule pid=3411 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 9 09:47:53.661000 audit[3411]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffd2159940 a2=0 a3=ffff8230d6c0 items=0 ppid=3252 pid=3411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:53.661000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:47:53.663000 audit[3411]: NETFILTER_CFG table=nat:102 family=10 entries=10 op=nft_register_chain pid=3411 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 9 09:47:53.663000 audit[3411]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1968 a0=3 a1=ffffd2159940 a2=0 a3=ffff8230d6c0 items=0 ppid=3252 pid=3411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:53.663000 audit: PROCTITLE 
proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:47:54.701272 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1056702651.mount: Deactivated successfully. Feb 9 09:47:56.173814 env[1801]: time="2024-02-09T09:47:56.173760240Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:56.179078 env[1801]: time="2024-02-09T09:47:56.179029574Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c7a10ec867a90652f951a6ba5a12efb94165e0a1c9b72167810d1065e57d768f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:56.183258 env[1801]: time="2024-02-09T09:47:56.183211008Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:56.186188 env[1801]: time="2024-02-09T09:47:56.186124498Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:715ac9a30f8a9579e44258af20de354715429e11836b493918e9e1a696e9b028,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:56.187740 env[1801]: time="2024-02-09T09:47:56.187676054Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.32.3\" returns image reference \"sha256:c7a10ec867a90652f951a6ba5a12efb94165e0a1c9b72167810d1065e57d768f\"" Feb 9 09:47:56.196780 env[1801]: time="2024-02-09T09:47:56.196728194Z" level=info msg="CreateContainer within sandbox \"019bedce69f50ba86d451ffc6eb829ffe4cfeb6c7ea73072dc90b6bed1954e29\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 9 09:47:56.225208 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4087874359.mount: Deactivated successfully. 
Feb 9 09:47:56.241334 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3533723903.mount: Deactivated successfully. Feb 9 09:47:56.242975 env[1801]: time="2024-02-09T09:47:56.242897677Z" level=info msg="CreateContainer within sandbox \"019bedce69f50ba86d451ffc6eb829ffe4cfeb6c7ea73072dc90b6bed1954e29\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"315c98a85a755f3a9e8f19c5ba2869601673768b40be27c8df9b446923999c0f\"" Feb 9 09:47:56.245260 env[1801]: time="2024-02-09T09:47:56.244034594Z" level=info msg="StartContainer for \"315c98a85a755f3a9e8f19c5ba2869601673768b40be27c8df9b446923999c0f\"" Feb 9 09:47:56.362123 env[1801]: time="2024-02-09T09:47:56.358943245Z" level=info msg="StartContainer for \"315c98a85a755f3a9e8f19c5ba2869601673768b40be27c8df9b446923999c0f\" returns successfully" Feb 9 09:47:57.052988 kubelet[3093]: I0209 09:47:57.050925 3093 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-tjdzh" podStartSLOduration=5.050834824 pod.CreationTimestamp="2024-02-09 09:47:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:47:54.036655041 +0000 UTC m=+15.685791002" watchObservedRunningTime="2024-02-09 09:47:57.050834824 +0000 UTC m=+18.699970773" Feb 9 09:47:57.052988 kubelet[3093]: I0209 09:47:57.051172 3093 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-cfc98749c-6sc77" podStartSLOduration=-9.22337203180367e+09 pod.CreationTimestamp="2024-02-09 09:47:52 +0000 UTC" firstStartedPulling="2024-02-09 09:47:53.473070968 +0000 UTC m=+15.122206893" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:47:57.050434409 +0000 UTC m=+18.699570358" watchObservedRunningTime="2024-02-09 09:47:57.051104979 +0000 UTC m=+18.700240940" Feb 9 09:48:00.139000 audit[3473]: NETFILTER_CFG table=filter:103 family=2 
entries=13 op=nft_register_rule pid=3473 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:48:00.143010 kernel: kauditd_printk_skb: 122 callbacks suppressed Feb 9 09:48:00.143078 kernel: audit: type=1325 audit(1707472080.139:274): table=filter:103 family=2 entries=13 op=nft_register_rule pid=3473 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:48:00.139000 audit[3473]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4732 a0=3 a1=ffffcea13b70 a2=0 a3=ffff92a2e6c0 items=0 ppid=3252 pid=3473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:00.160975 kernel: audit: type=1300 audit(1707472080.139:274): arch=c00000b7 syscall=211 success=yes exit=4732 a0=3 a1=ffffcea13b70 a2=0 a3=ffff92a2e6c0 items=0 ppid=3252 pid=3473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:00.139000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:48:00.166804 kernel: audit: type=1327 audit(1707472080.139:274): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:48:00.140000 audit[3473]: NETFILTER_CFG table=nat:104 family=2 entries=20 op=nft_register_rule pid=3473 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:48:00.175113 kernel: audit: type=1325 audit(1707472080.140:275): table=nat:104 family=2 entries=20 op=nft_register_rule pid=3473 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:48:00.140000 audit[3473]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=ffffcea13b70 a2=0 a3=ffff92a2e6c0 items=0 ppid=3252 pid=3473 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:00.202481 kernel: audit: type=1300 audit(1707472080.140:275): arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=ffffcea13b70 a2=0 a3=ffff92a2e6c0 items=0 ppid=3252 pid=3473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:00.140000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:48:00.231311 kernel: audit: type=1327 audit(1707472080.140:275): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:48:00.265877 kubelet[3093]: I0209 09:48:00.265813 3093 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:48:00.434000 audit[3499]: NETFILTER_CFG table=filter:105 family=2 entries=14 op=nft_register_rule pid=3499 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:48:00.442618 kernel: audit: type=1325 audit(1707472080.434:276): table=filter:105 family=2 entries=14 op=nft_register_rule pid=3499 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:48:00.442786 kernel: audit: type=1300 audit(1707472080.434:276): arch=c00000b7 syscall=211 success=yes exit=4732 a0=3 a1=ffffc3737280 a2=0 a3=ffff8aceb6c0 items=0 ppid=3252 pid=3499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:00.434000 audit[3499]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4732 a0=3 a1=ffffc3737280 a2=0 a3=ffff8aceb6c0 items=0 ppid=3252 pid=3499 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:00.454937 kubelet[3093]: I0209 09:48:00.454867 3093 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlhkh\" (UniqueName: \"kubernetes.io/projected/a712ad65-c89e-4d5a-b152-bf02685cc29e-kube-api-access-hlhkh\") pod \"calico-typha-68f5cfc87c-c9r5q\" (UID: \"a712ad65-c89e-4d5a-b152-bf02685cc29e\") " pod="calico-system/calico-typha-68f5cfc87c-c9r5q" Feb 9 09:48:00.455095 kubelet[3093]: I0209 09:48:00.454965 3093 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a712ad65-c89e-4d5a-b152-bf02685cc29e-tigera-ca-bundle\") pod \"calico-typha-68f5cfc87c-c9r5q\" (UID: \"a712ad65-c89e-4d5a-b152-bf02685cc29e\") " pod="calico-system/calico-typha-68f5cfc87c-c9r5q" Feb 9 09:48:00.455095 kubelet[3093]: I0209 09:48:00.455016 3093 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a712ad65-c89e-4d5a-b152-bf02685cc29e-typha-certs\") pod \"calico-typha-68f5cfc87c-c9r5q\" (UID: \"a712ad65-c89e-4d5a-b152-bf02685cc29e\") " pod="calico-system/calico-typha-68f5cfc87c-c9r5q" Feb 9 09:48:00.434000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:48:00.465704 kernel: audit: type=1327 audit(1707472080.434:276): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:48:00.455000 audit[3499]: NETFILTER_CFG table=nat:106 family=2 entries=20 op=nft_register_rule pid=3499 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:48:00.471905 kernel: audit: type=1325 
audit(1707472080.455:277): table=nat:106 family=2 entries=20 op=nft_register_rule pid=3499 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:48:00.455000 audit[3499]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=ffffc3737280 a2=0 a3=ffff8aceb6c0 items=0 ppid=3252 pid=3499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:00.455000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:48:00.545459 kubelet[3093]: I0209 09:48:00.545386 3093 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:48:00.555414 kubelet[3093]: I0209 09:48:00.555351 3093 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/98995b7f-a7e1-4998-9646-6e152deb27e4-lib-modules\") pod \"calico-node-dz9b4\" (UID: \"98995b7f-a7e1-4998-9646-6e152deb27e4\") " pod="calico-system/calico-node-dz9b4" Feb 9 09:48:00.555414 kubelet[3093]: I0209 09:48:00.555424 3093 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/98995b7f-a7e1-4998-9646-6e152deb27e4-cni-log-dir\") pod \"calico-node-dz9b4\" (UID: \"98995b7f-a7e1-4998-9646-6e152deb27e4\") " pod="calico-system/calico-node-dz9b4" Feb 9 09:48:00.555674 kubelet[3093]: I0209 09:48:00.555478 3093 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98995b7f-a7e1-4998-9646-6e152deb27e4-tigera-ca-bundle\") pod \"calico-node-dz9b4\" (UID: \"98995b7f-a7e1-4998-9646-6e152deb27e4\") " pod="calico-system/calico-node-dz9b4" Feb 9 09:48:00.555674 kubelet[3093]: I0209 09:48:00.555536 3093 
reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/98995b7f-a7e1-4998-9646-6e152deb27e4-flexvol-driver-host\") pod \"calico-node-dz9b4\" (UID: \"98995b7f-a7e1-4998-9646-6e152deb27e4\") " pod="calico-system/calico-node-dz9b4" Feb 9 09:48:00.555811 kubelet[3093]: I0209 09:48:00.555678 3093 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/98995b7f-a7e1-4998-9646-6e152deb27e4-var-lib-calico\") pod \"calico-node-dz9b4\" (UID: \"98995b7f-a7e1-4998-9646-6e152deb27e4\") " pod="calico-system/calico-node-dz9b4" Feb 9 09:48:00.555811 kubelet[3093]: I0209 09:48:00.555729 3093 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trxkl\" (UniqueName: \"kubernetes.io/projected/98995b7f-a7e1-4998-9646-6e152deb27e4-kube-api-access-trxkl\") pod \"calico-node-dz9b4\" (UID: \"98995b7f-a7e1-4998-9646-6e152deb27e4\") " pod="calico-system/calico-node-dz9b4" Feb 9 09:48:00.555811 kubelet[3093]: I0209 09:48:00.555776 3093 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/98995b7f-a7e1-4998-9646-6e152deb27e4-xtables-lock\") pod \"calico-node-dz9b4\" (UID: \"98995b7f-a7e1-4998-9646-6e152deb27e4\") " pod="calico-system/calico-node-dz9b4" Feb 9 09:48:00.555983 kubelet[3093]: I0209 09:48:00.555823 3093 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/98995b7f-a7e1-4998-9646-6e152deb27e4-policysync\") pod \"calico-node-dz9b4\" (UID: \"98995b7f-a7e1-4998-9646-6e152deb27e4\") " pod="calico-system/calico-node-dz9b4" Feb 9 09:48:00.555983 kubelet[3093]: I0209 09:48:00.555868 3093 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/98995b7f-a7e1-4998-9646-6e152deb27e4-cni-net-dir\") pod \"calico-node-dz9b4\" (UID: \"98995b7f-a7e1-4998-9646-6e152deb27e4\") " pod="calico-system/calico-node-dz9b4" Feb 9 09:48:00.555983 kubelet[3093]: I0209 09:48:00.555910 3093 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/98995b7f-a7e1-4998-9646-6e152deb27e4-var-run-calico\") pod \"calico-node-dz9b4\" (UID: \"98995b7f-a7e1-4998-9646-6e152deb27e4\") " pod="calico-system/calico-node-dz9b4" Feb 9 09:48:00.555983 kubelet[3093]: I0209 09:48:00.555960 3093 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/98995b7f-a7e1-4998-9646-6e152deb27e4-node-certs\") pod \"calico-node-dz9b4\" (UID: \"98995b7f-a7e1-4998-9646-6e152deb27e4\") " pod="calico-system/calico-node-dz9b4" Feb 9 09:48:00.556227 kubelet[3093]: I0209 09:48:00.556023 3093 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/98995b7f-a7e1-4998-9646-6e152deb27e4-cni-bin-dir\") pod \"calico-node-dz9b4\" (UID: \"98995b7f-a7e1-4998-9646-6e152deb27e4\") " pod="calico-system/calico-node-dz9b4" Feb 9 09:48:00.675094 kubelet[3093]: E0209 09:48:00.675059 3093 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:00.675348 kubelet[3093]: W0209 09:48:00.675302 3093 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:00.675508 kubelet[3093]: E0209 09:48:00.675487 3093 plugins.go:736] "Error dynamically probing plugins" 
err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:48:00.697839 kubelet[3093]: E0209 09:48:00.697508 3093 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:00.697839 kubelet[3093]: W0209 09:48:00.697545 3093 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:00.697839 kubelet[3093]: E0209 09:48:00.697628 3093 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:48:00.833535 kubelet[3093]: I0209 09:48:00.833474 3093 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:48:00.833973 kubelet[3093]: E0209 09:48:00.833935 3093 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ptrbz" podUID=b6105262-fb93-4a15-bf14-4f48140174ba Feb 9 09:48:00.858403 kubelet[3093]: E0209 09:48:00.858364 3093 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:00.858650 kubelet[3093]: W0209 09:48:00.858617 3093 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:00.858820 kubelet[3093]: E0209 09:48:00.858796 3093 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:48:00.860773 kubelet[3093]: E0209 09:48:00.860735 3093 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:00.860977 kubelet[3093]: W0209 09:48:00.860938 3093 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:00.861142 kubelet[3093]: E0209 09:48:00.861119 3093 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:48:00.861694 kubelet[3093]: E0209 09:48:00.861665 3093 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:00.861849 kubelet[3093]: W0209 09:48:00.861823 3093 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:00.861975 kubelet[3093]: E0209 09:48:00.861954 3093 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:48:00.863817 kubelet[3093]: E0209 09:48:00.863780 3093 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:00.864032 kubelet[3093]: W0209 09:48:00.863989 3093 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:00.864174 kubelet[3093]: E0209 09:48:00.864150 3093 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:48:00.864884 kubelet[3093]: E0209 09:48:00.864852 3093 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:00.865072 kubelet[3093]: W0209 09:48:00.865043 3093 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:00.865226 kubelet[3093]: E0209 09:48:00.865205 3093 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:48:00.866769 kubelet[3093]: E0209 09:48:00.866734 3093 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:00.866989 kubelet[3093]: W0209 09:48:00.866960 3093 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:00.867113 kubelet[3093]: E0209 09:48:00.867092 3093 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:48:00.868730 kubelet[3093]: E0209 09:48:00.868698 3093 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:00.868907 kubelet[3093]: W0209 09:48:00.868880 3093 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:00.869028 kubelet[3093]: E0209 09:48:00.869007 3093 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:48:00.870783 kubelet[3093]: E0209 09:48:00.870733 3093 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:00.870783 kubelet[3093]: W0209 09:48:00.870772 3093 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:00.870980 kubelet[3093]: E0209 09:48:00.870810 3093 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:48:00.872622 kubelet[3093]: E0209 09:48:00.871272 3093 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:00.872622 kubelet[3093]: W0209 09:48:00.871301 3093 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:00.872622 kubelet[3093]: E0209 09:48:00.871330 3093 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:48:00.872622 kubelet[3093]: E0209 09:48:00.871858 3093 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:00.872622 kubelet[3093]: W0209 09:48:00.871879 3093 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:00.872622 kubelet[3093]: E0209 09:48:00.871906 3093 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:48:00.872622 kubelet[3093]: E0209 09:48:00.872214 3093 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:00.872622 kubelet[3093]: W0209 09:48:00.872231 3093 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:00.872622 kubelet[3093]: E0209 09:48:00.872257 3093 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:48:00.872622 kubelet[3093]: E0209 09:48:00.872534 3093 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:00.873254 kubelet[3093]: W0209 09:48:00.872550 3093 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:00.873254 kubelet[3093]: E0209 09:48:00.872605 3093 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:48:00.874269 kubelet[3093]: E0209 09:48:00.873631 3093 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:00.874269 kubelet[3093]: W0209 09:48:00.873688 3093 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:00.874269 kubelet[3093]: E0209 09:48:00.873723 3093 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:48:00.874269 kubelet[3093]: E0209 09:48:00.874187 3093 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:00.874269 kubelet[3093]: W0209 09:48:00.874240 3093 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:00.874668 env[1801]: time="2024-02-09T09:48:00.873751035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dz9b4,Uid:98995b7f-a7e1-4998-9646-6e152deb27e4,Namespace:calico-system,Attempt:0,}" Feb 9 09:48:00.875299 kubelet[3093]: E0209 09:48:00.874403 3093 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:48:00.880426 kubelet[3093]: E0209 09:48:00.880032 3093 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:00.880426 kubelet[3093]: W0209 09:48:00.880073 3093 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:00.880426 kubelet[3093]: E0209 09:48:00.880110 3093 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:48:00.885459 env[1801]: time="2024-02-09T09:48:00.885366025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-68f5cfc87c-c9r5q,Uid:a712ad65-c89e-4d5a-b152-bf02685cc29e,Namespace:calico-system,Attempt:0,}" Feb 9 09:48:00.888044 kubelet[3093]: E0209 09:48:00.887887 3093 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:00.888044 kubelet[3093]: W0209 09:48:00.887922 3093 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:00.888044 kubelet[3093]: E0209 09:48:00.887959 3093 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:48:00.888044 kubelet[3093]: I0209 09:48:00.888011 3093 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b6105262-fb93-4a15-bf14-4f48140174ba-varrun\") pod \"csi-node-driver-ptrbz\" (UID: \"b6105262-fb93-4a15-bf14-4f48140174ba\") " pod="calico-system/csi-node-driver-ptrbz" Feb 9 09:48:00.889694 kubelet[3093]: E0209 09:48:00.889408 3093 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:00.889694 kubelet[3093]: W0209 09:48:00.889473 3093 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:00.889891 kubelet[3093]: E0209 09:48:00.889519 3093 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:48:00.889954 kubelet[3093]: I0209 09:48:00.889902 3093 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr5n9\" (UniqueName: \"kubernetes.io/projected/b6105262-fb93-4a15-bf14-4f48140174ba-kube-api-access-qr5n9\") pod \"csi-node-driver-ptrbz\" (UID: \"b6105262-fb93-4a15-bf14-4f48140174ba\") " pod="calico-system/csi-node-driver-ptrbz" Feb 9 09:48:00.890821 kubelet[3093]: E0209 09:48:00.890603 3093 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:00.890821 kubelet[3093]: W0209 09:48:00.890641 3093 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:00.890821 kubelet[3093]: E0209 09:48:00.890703 3093 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:48:00.894615 kubelet[3093]: E0209 09:48:00.892533 3093 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:00.894615 kubelet[3093]: W0209 09:48:00.892602 3093 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:00.894615 kubelet[3093]: E0209 09:48:00.892641 3093 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:48:00.897351 kubelet[3093]: E0209 09:48:00.896448 3093 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:00.897351 kubelet[3093]: W0209 09:48:00.896516 3093 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:00.897351 kubelet[3093]: E0209 09:48:00.896697 3093 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:48:00.897351 kubelet[3093]: I0209 09:48:00.897106 3093 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b6105262-fb93-4a15-bf14-4f48140174ba-socket-dir\") pod \"csi-node-driver-ptrbz\" (UID: \"b6105262-fb93-4a15-bf14-4f48140174ba\") " pod="calico-system/csi-node-driver-ptrbz" Feb 9 09:48:00.897725 kubelet[3093]: E0209 09:48:00.897539 3093 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:00.897725 kubelet[3093]: W0209 09:48:00.897601 3093 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:00.897725 kubelet[3093]: E0209 09:48:00.897638 3093 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:48:00.899648 kubelet[3093]: E0209 09:48:00.898096 3093 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:00.899648 kubelet[3093]: W0209 09:48:00.898154 3093 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:00.899648 kubelet[3093]: E0209 09:48:00.898248 3093 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:48:00.899648 kubelet[3093]: E0209 09:48:00.898884 3093 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:00.899648 kubelet[3093]: W0209 09:48:00.898908 3093 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:00.899648 kubelet[3093]: E0209 09:48:00.898978 3093 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:48:00.899648 kubelet[3093]: I0209 09:48:00.899232 3093 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b6105262-fb93-4a15-bf14-4f48140174ba-kubelet-dir\") pod \"csi-node-driver-ptrbz\" (UID: \"b6105262-fb93-4a15-bf14-4f48140174ba\") " pod="calico-system/csi-node-driver-ptrbz" Feb 9 09:48:00.900134 kubelet[3093]: E0209 09:48:00.899728 3093 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:00.900134 kubelet[3093]: W0209 09:48:00.899751 3093 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:00.900134 kubelet[3093]: E0209 09:48:00.899821 3093 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:48:00.900313 kubelet[3093]: E0209 09:48:00.900259 3093 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:00.900313 kubelet[3093]: W0209 09:48:00.900277 3093 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:00.900432 kubelet[3093]: E0209 09:48:00.900301 3093 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:48:00.902030 kubelet[3093]: E0209 09:48:00.900948 3093 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:00.902030 kubelet[3093]: W0209 09:48:00.900980 3093 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:00.902030 kubelet[3093]: E0209 09:48:00.901019 3093 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:48:00.902030 kubelet[3093]: I0209 09:48:00.901063 3093 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b6105262-fb93-4a15-bf14-4f48140174ba-registration-dir\") pod \"csi-node-driver-ptrbz\" (UID: \"b6105262-fb93-4a15-bf14-4f48140174ba\") " pod="calico-system/csi-node-driver-ptrbz" Feb 9 09:48:00.902030 kubelet[3093]: E0209 09:48:00.901413 3093 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:00.902030 kubelet[3093]: W0209 09:48:00.901433 3093 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:00.902030 kubelet[3093]: E0209 09:48:00.901466 3093 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:48:00.920770 kubelet[3093]: E0209 09:48:00.920708 3093 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:00.920770 kubelet[3093]: W0209 09:48:00.920749 3093 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:00.920988 kubelet[3093]: E0209 09:48:00.920788 3093 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:48:00.930789 kubelet[3093]: E0209 09:48:00.930730 3093 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:00.930789 kubelet[3093]: W0209 09:48:00.930768 3093 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:00.930993 kubelet[3093]: E0209 09:48:00.930806 3093 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:48:00.939888 kubelet[3093]: E0209 09:48:00.939591 3093 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:00.939888 kubelet[3093]: W0209 09:48:00.939626 3093 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:00.939888 kubelet[3093]: E0209 09:48:00.939662 3093 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:48:00.993821 env[1801]: time="2024-02-09T09:48:00.988385189Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:48:00.994027 env[1801]: time="2024-02-09T09:48:00.988462697Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:48:00.994027 env[1801]: time="2024-02-09T09:48:00.988519694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:48:00.997502 env[1801]: time="2024-02-09T09:48:00.997347764Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bd66bf1052ba4ca5bff39135b1dedf90781a267c77743cbe99ac0021a41e1196 pid=3553 runtime=io.containerd.runc.v2 Feb 9 09:48:01.021434 env[1801]: time="2024-02-09T09:48:01.017217415Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:48:01.021434 env[1801]: time="2024-02-09T09:48:01.017346723Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:48:01.021434 env[1801]: time="2024-02-09T09:48:01.017376632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:48:01.021434 env[1801]: time="2024-02-09T09:48:01.017988512Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a6fcb657a4fb42ade0ecf6b227cbc330003985f43e5a97ac48933a4bd9c5f3da pid=3565 runtime=io.containerd.runc.v2 Feb 9 09:48:01.042344 kubelet[3093]: E0209 09:48:01.042288 3093 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:01.042344 kubelet[3093]: W0209 09:48:01.042321 3093 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:01.042558 kubelet[3093]: E0209 09:48:01.042357 3093 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:48:01.046442 kubelet[3093]: E0209 09:48:01.045452 3093 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:01.046442 kubelet[3093]: W0209 09:48:01.045498 3093 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:01.046442 kubelet[3093]: E0209 09:48:01.045547 3093 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:48:01.048765 kubelet[3093]: E0209 09:48:01.048717 3093 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:01.048765 kubelet[3093]: W0209 09:48:01.048755 3093 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:01.049002 kubelet[3093]: E0209 09:48:01.048983 3093 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:48:01.049545 kubelet[3093]: E0209 09:48:01.049395 3093 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:01.049545 kubelet[3093]: W0209 09:48:01.049425 3093 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:01.049545 kubelet[3093]: E0209 09:48:01.049499 3093 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:48:01.051777 kubelet[3093]: E0209 09:48:01.051723 3093 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:01.051777 kubelet[3093]: W0209 09:48:01.051762 3093 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:01.051991 kubelet[3093]: E0209 09:48:01.051973 3093 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:48:01.052872 kubelet[3093]: E0209 09:48:01.052831 3093 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:01.052872 kubelet[3093]: W0209 09:48:01.052863 3093 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:01.057823 kubelet[3093]: E0209 09:48:01.057774 3093 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:01.057823 kubelet[3093]: W0209 09:48:01.057812 3093 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:01.058329 kubelet[3093]: E0209 09:48:01.058217 3093 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:01.058329 kubelet[3093]: W0209 09:48:01.058245 3093 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], 
error: executable file not found in $PATH, output: "" Feb 9 09:48:01.069472 kubelet[3093]: E0209 09:48:01.069375 3093 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:01.069472 kubelet[3093]: W0209 09:48:01.069415 3093 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:01.069817 kubelet[3093]: E0209 09:48:01.069602 3093 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:48:01.069817 kubelet[3093]: E0209 09:48:01.069702 3093 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:48:01.069817 kubelet[3093]: E0209 09:48:01.069731 3093 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:48:01.069817 kubelet[3093]: E0209 09:48:01.069759 3093 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:48:01.077841 kubelet[3093]: E0209 09:48:01.077778 3093 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:01.077841 kubelet[3093]: W0209 09:48:01.077814 3093 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:01.087301 kubelet[3093]: E0209 09:48:01.087249 3093 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:48:01.087911 kubelet[3093]: E0209 09:48:01.087724 3093 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:01.087911 kubelet[3093]: W0209 09:48:01.087757 3093 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:01.088985 kubelet[3093]: E0209 09:48:01.088778 3093 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:48:01.090145 kubelet[3093]: E0209 09:48:01.089739 3093 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:01.090145 kubelet[3093]: W0209 09:48:01.089769 3093 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:01.090897 kubelet[3093]: E0209 09:48:01.090395 3093 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:48:01.090897 kubelet[3093]: E0209 09:48:01.090525 3093 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:01.090897 kubelet[3093]: W0209 09:48:01.090555 3093 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:01.091746 kubelet[3093]: E0209 09:48:01.091198 3093 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:48:01.091746 kubelet[3093]: E0209 09:48:01.091322 3093 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:01.091746 kubelet[3093]: W0209 09:48:01.091339 3093 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:01.092058 kubelet[3093]: E0209 09:48:01.092004 3093 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:48:01.092494 kubelet[3093]: E0209 09:48:01.092239 3093 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:01.092494 kubelet[3093]: W0209 09:48:01.092261 3093 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:01.092494 kubelet[3093]: E0209 09:48:01.092440 3093 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:48:01.093048 kubelet[3093]: E0209 09:48:01.092882 3093 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:01.093048 kubelet[3093]: W0209 09:48:01.092903 3093 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:01.093048 kubelet[3093]: E0209 09:48:01.092965 3093 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:48:01.093387 kubelet[3093]: E0209 09:48:01.093354 3093 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:01.093513 kubelet[3093]: W0209 09:48:01.093490 3093 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:01.093704 kubelet[3093]: E0209 09:48:01.093662 3093 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:48:01.095172 kubelet[3093]: E0209 09:48:01.095141 3093 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:01.095363 kubelet[3093]: W0209 09:48:01.095336 3093 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:01.095622 kubelet[3093]: E0209 09:48:01.095530 3093 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:48:01.096125 kubelet[3093]: E0209 09:48:01.096091 3093 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:01.099242 kubelet[3093]: W0209 09:48:01.099178 3093 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:01.100000 kubelet[3093]: E0209 09:48:01.099969 3093 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:01.100226 kubelet[3093]: W0209 09:48:01.100197 3093 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:01.100778 kubelet[3093]: E0209 09:48:01.100756 3093 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:01.100917 kubelet[3093]: W0209 09:48:01.100894 3093 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], 
error: executable file not found in $PATH, output: "" Feb 9 09:48:01.101485 kubelet[3093]: E0209 09:48:01.101457 3093 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:01.101782 kubelet[3093]: W0209 09:48:01.101754 3093 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:01.101931 kubelet[3093]: E0209 09:48:01.101909 3093 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:48:01.105861 kubelet[3093]: E0209 09:48:01.101676 3093 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:48:01.106081 kubelet[3093]: E0209 09:48:01.101699 3093 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:48:01.106206 kubelet[3093]: E0209 09:48:01.101711 3093 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:48:01.108851 kubelet[3093]: E0209 09:48:01.108817 3093 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:01.109052 kubelet[3093]: W0209 09:48:01.109024 3093 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:01.109265 kubelet[3093]: E0209 09:48:01.109230 3093 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:48:01.109997 kubelet[3093]: E0209 09:48:01.109730 3093 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:01.110193 kubelet[3093]: W0209 09:48:01.110160 3093 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:01.110348 kubelet[3093]: E0209 09:48:01.110327 3093 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:48:01.114802 kubelet[3093]: E0209 09:48:01.114767 3093 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:01.115001 kubelet[3093]: W0209 09:48:01.114975 3093 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:01.115151 kubelet[3093]: E0209 09:48:01.115130 3093 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:48:01.128855 kubelet[3093]: E0209 09:48:01.128821 3093 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:01.129072 kubelet[3093]: W0209 09:48:01.129030 3093 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:01.129247 kubelet[3093]: E0209 09:48:01.129225 3093 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:48:01.283807 env[1801]: time="2024-02-09T09:48:01.282077050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dz9b4,Uid:98995b7f-a7e1-4998-9646-6e152deb27e4,Namespace:calico-system,Attempt:0,} returns sandbox id \"bd66bf1052ba4ca5bff39135b1dedf90781a267c77743cbe99ac0021a41e1196\"" Feb 9 09:48:01.292016 env[1801]: time="2024-02-09T09:48:01.291924727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\"" Feb 9 09:48:01.322875 env[1801]: time="2024-02-09T09:48:01.322820143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-68f5cfc87c-c9r5q,Uid:a712ad65-c89e-4d5a-b152-bf02685cc29e,Namespace:calico-system,Attempt:0,} returns sandbox id \"a6fcb657a4fb42ade0ecf6b227cbc330003985f43e5a97ac48933a4bd9c5f3da\"" Feb 9 09:48:01.635000 audit[3681]: NETFILTER_CFG table=filter:107 family=2 entries=14 op=nft_register_rule pid=3681 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:48:01.635000 audit[3681]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4732 a0=3 a1=ffffdaccc8e0 a2=0 a3=ffff9c13d6c0 items=0 ppid=3252 pid=3681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:01.635000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:48:01.639000 audit[3681]: NETFILTER_CFG table=nat:108 family=2 entries=20 op=nft_register_rule pid=3681 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:48:01.639000 audit[3681]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=ffffdaccc8e0 a2=0 a3=ffff9c13d6c0 items=0 ppid=3252 pid=3681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:01.639000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:48:02.949802 kubelet[3093]: E0209 09:48:02.949744 3093 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ptrbz" podUID=b6105262-fb93-4a15-bf14-4f48140174ba Feb 9 09:48:03.123587 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1190857571.mount: Deactivated successfully. Feb 9 09:48:03.287689 env[1801]: time="2024-02-09T09:48:03.286473336Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:48:03.290532 env[1801]: time="2024-02-09T09:48:03.290395475Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbddd33ed55a4a5c129e8f09945d426860425b9778d9402efe7bcefea7990a57,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:48:03.294380 env[1801]: time="2024-02-09T09:48:03.294328594Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:48:03.297778 env[1801]: time="2024-02-09T09:48:03.297710532Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b05edbd1f80db4ada229e6001a666a7dd36bb6ab617143684fb3d28abfc4b71e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:48:03.299988 env[1801]: time="2024-02-09T09:48:03.299861493Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\" 
returns image reference \"sha256:cbddd33ed55a4a5c129e8f09945d426860425b9778d9402efe7bcefea7990a57\"" Feb 9 09:48:03.302352 env[1801]: time="2024-02-09T09:48:03.302277959Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.27.0\"" Feb 9 09:48:03.312972 env[1801]: time="2024-02-09T09:48:03.311203741Z" level=info msg="CreateContainer within sandbox \"bd66bf1052ba4ca5bff39135b1dedf90781a267c77743cbe99ac0021a41e1196\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 9 09:48:03.359693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4179108386.mount: Deactivated successfully. Feb 9 09:48:03.378619 env[1801]: time="2024-02-09T09:48:03.378284502Z" level=info msg="CreateContainer within sandbox \"bd66bf1052ba4ca5bff39135b1dedf90781a267c77743cbe99ac0021a41e1196\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"472e4d0bdb3d9431964798a3601b288210377b33534db88b44f8e80ffe008e5b\"" Feb 9 09:48:03.379250 env[1801]: time="2024-02-09T09:48:03.379204020Z" level=info msg="StartContainer for \"472e4d0bdb3d9431964798a3601b288210377b33534db88b44f8e80ffe008e5b\"" Feb 9 09:48:03.541547 env[1801]: time="2024-02-09T09:48:03.540811545Z" level=info msg="StartContainer for \"472e4d0bdb3d9431964798a3601b288210377b33534db88b44f8e80ffe008e5b\" returns successfully" Feb 9 09:48:03.929227 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-472e4d0bdb3d9431964798a3601b288210377b33534db88b44f8e80ffe008e5b-rootfs.mount: Deactivated successfully. 
Feb 9 09:48:03.969624 env[1801]: time="2024-02-09T09:48:03.969519170Z" level=info msg="shim disconnected" id=472e4d0bdb3d9431964798a3601b288210377b33534db88b44f8e80ffe008e5b Feb 9 09:48:03.970038 env[1801]: time="2024-02-09T09:48:03.969983183Z" level=warning msg="cleaning up after shim disconnected" id=472e4d0bdb3d9431964798a3601b288210377b33534db88b44f8e80ffe008e5b namespace=k8s.io Feb 9 09:48:03.970194 env[1801]: time="2024-02-09T09:48:03.970165406Z" level=info msg="cleaning up dead shim" Feb 9 09:48:03.987462 env[1801]: time="2024-02-09T09:48:03.987406331Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:48:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3726 runtime=io.containerd.runc.v2\n" Feb 9 09:48:04.949276 kubelet[3093]: E0209 09:48:04.949230 3093 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ptrbz" podUID=b6105262-fb93-4a15-bf14-4f48140174ba Feb 9 09:48:05.103190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4234567045.mount: Deactivated successfully. 
Feb 9 09:48:06.229875 env[1801]: time="2024-02-09T09:48:06.229822611Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:48:06.234074 env[1801]: time="2024-02-09T09:48:06.234025011Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fba96c9caf161e105c76b559b06b4b2337b89b54833d69984209161d93145969,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:48:06.236706 env[1801]: time="2024-02-09T09:48:06.236645649Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:48:06.239376 env[1801]: time="2024-02-09T09:48:06.239330449Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:5f2d3b8c354a4eb6de46e786889913916e620c6c256982fb8d0f1a1d36a282bc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:48:06.240622 env[1801]: time="2024-02-09T09:48:06.240515303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.27.0\" returns image reference \"sha256:fba96c9caf161e105c76b559b06b4b2337b89b54833d69984209161d93145969\""
Feb 9 09:48:06.275323 env[1801]: time="2024-02-09T09:48:06.270687189Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\""
Feb 9 09:48:06.295061 env[1801]: time="2024-02-09T09:48:06.294990050Z" level=info msg="CreateContainer within sandbox \"a6fcb657a4fb42ade0ecf6b227cbc330003985f43e5a97ac48933a4bd9c5f3da\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Feb 9 09:48:06.352340 env[1801]: time="2024-02-09T09:48:06.350315812Z" level=info msg="CreateContainer within sandbox \"a6fcb657a4fb42ade0ecf6b227cbc330003985f43e5a97ac48933a4bd9c5f3da\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2282e9a31234311b927371a8c6b9c101b71dc4a8d67e26e70056e02fbc4be799\""
Feb 9 09:48:06.352561 env[1801]: time="2024-02-09T09:48:06.352445678Z" level=info msg="StartContainer for \"2282e9a31234311b927371a8c6b9c101b71dc4a8d67e26e70056e02fbc4be799\""
Feb 9 09:48:06.539190 env[1801]: time="2024-02-09T09:48:06.539123676Z" level=info msg="StartContainer for \"2282e9a31234311b927371a8c6b9c101b71dc4a8d67e26e70056e02fbc4be799\" returns successfully"
Feb 9 09:48:06.949692 kubelet[3093]: E0209 09:48:06.948085 3093 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ptrbz" podUID=b6105262-fb93-4a15-bf14-4f48140174ba
Feb 9 09:48:07.255297 systemd[1]: run-containerd-runc-k8s.io-2282e9a31234311b927371a8c6b9c101b71dc4a8d67e26e70056e02fbc4be799-runc.scgykR.mount: Deactivated successfully.
Feb 9 09:48:07.697447 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3240142400.mount: Deactivated successfully.
Feb 9 09:48:08.121582 kubelet[3093]: I0209 09:48:08.121406 3093 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-68f5cfc87c-c9r5q" podStartSLOduration=-9.22337202873343e+09 pod.CreationTimestamp="2024-02-09 09:48:00 +0000 UTC" firstStartedPulling="2024-02-09 09:48:01.324837071 +0000 UTC m=+22.973973008" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:48:07.122867528 +0000 UTC m=+28.772003501" watchObservedRunningTime="2024-02-09 09:48:08.121345763 +0000 UTC m=+29.770481724"
Feb 9 09:48:08.405629 kernel: kauditd_printk_skb: 8 callbacks suppressed
Feb 9 09:48:08.405797 kernel: audit: type=1325 audit(1707472088.401:280): table=filter:109 family=2 entries=13 op=nft_register_rule pid=3811 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 9 09:48:08.401000 audit[3811]: NETFILTER_CFG table=filter:109 family=2 entries=13 op=nft_register_rule pid=3811 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 9 09:48:08.401000 audit[3811]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4028 a0=3 a1=ffffe5c54450 a2=0 a3=ffffae4436c0 items=0 ppid=3252 pid=3811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:48:08.423779 kernel: audit: type=1300 audit(1707472088.401:280): arch=c00000b7 syscall=211 success=yes exit=4028 a0=3 a1=ffffe5c54450 a2=0 a3=ffffae4436c0 items=0 ppid=3252 pid=3811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:48:08.401000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 9 09:48:08.432119 kernel: audit: type=1327 audit(1707472088.401:280): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 9 09:48:08.401000 audit[3811]: NETFILTER_CFG table=nat:110 family=2 entries=27 op=nft_register_chain pid=3811 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 9 09:48:08.438285 kernel: audit: type=1325 audit(1707472088.401:281): table=nat:110 family=2 entries=27 op=nft_register_chain pid=3811 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 9 09:48:08.442619 kernel: audit: type=1300 audit(1707472088.401:281): arch=c00000b7 syscall=211 success=yes exit=8836 a0=3 a1=ffffe5c54450 a2=0 a3=ffffae4436c0 items=0 ppid=3252 pid=3811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:48:08.401000 audit[3811]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8836 a0=3 a1=ffffe5c54450 a2=0 a3=ffffae4436c0 items=0 ppid=3252 pid=3811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:48:08.401000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 9 09:48:08.460610 kernel: audit: type=1327 audit(1707472088.401:281): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 9 09:48:08.949896 kubelet[3093]: E0209 09:48:08.949835 3093 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ptrbz" podUID=b6105262-fb93-4a15-bf14-4f48140174ba
Feb 9 09:48:10.948337 kubelet[3093]: E0209 09:48:10.948280 3093 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ptrbz" podUID=b6105262-fb93-4a15-bf14-4f48140174ba
Feb 9 09:48:11.693173 env[1801]: time="2024-02-09T09:48:11.693117083Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:48:11.695973 env[1801]: time="2024-02-09T09:48:11.695924998Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9c9318f5fbf505fc3d84676966009a3887e58ea1e3eac10039e5a96dfceb254b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:48:11.698639 env[1801]: time="2024-02-09T09:48:11.698555061Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:48:11.701553 env[1801]: time="2024-02-09T09:48:11.701504414Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:d943b4c23e82a39b0186a1a3b2fe8f728e543d503df72d7be521501a82b7e7b4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:48:11.702682 env[1801]: time="2024-02-09T09:48:11.702636218Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\" returns image reference \"sha256:9c9318f5fbf505fc3d84676966009a3887e58ea1e3eac10039e5a96dfceb254b\""
Feb 9 09:48:11.708816 env[1801]: time="2024-02-09T09:48:11.708758260Z" level=info msg="CreateContainer within sandbox \"bd66bf1052ba4ca5bff39135b1dedf90781a267c77743cbe99ac0021a41e1196\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Feb 9 09:48:11.734089 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount779979070.mount: Deactivated successfully.
Feb 9 09:48:11.743760 env[1801]: time="2024-02-09T09:48:11.743677653Z" level=info msg="CreateContainer within sandbox \"bd66bf1052ba4ca5bff39135b1dedf90781a267c77743cbe99ac0021a41e1196\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"52a6c985a7661d34491843137bce9d43925ffae45fbbc337ed4f655e87e75b9d\""
Feb 9 09:48:11.746546 env[1801]: time="2024-02-09T09:48:11.744827604Z" level=info msg="StartContainer for \"52a6c985a7661d34491843137bce9d43925ffae45fbbc337ed4f655e87e75b9d\""
Feb 9 09:48:11.887098 env[1801]: time="2024-02-09T09:48:11.887034691Z" level=info msg="StartContainer for \"52a6c985a7661d34491843137bce9d43925ffae45fbbc337ed4f655e87e75b9d\" returns successfully"
Feb 9 09:48:12.947484 kubelet[3093]: E0209 09:48:12.947432 3093 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ptrbz" podUID=b6105262-fb93-4a15-bf14-4f48140174ba
Feb 9 09:48:13.079786 env[1801]: time="2024-02-09T09:48:13.079687345Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 9 09:48:13.132867 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52a6c985a7661d34491843137bce9d43925ffae45fbbc337ed4f655e87e75b9d-rootfs.mount: Deactivated successfully.
Feb 9 09:48:13.143531 kubelet[3093]: I0209 09:48:13.140357 3093 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Feb 9 09:48:13.184902 kubelet[3093]: I0209 09:48:13.184822 3093 topology_manager.go:210] "Topology Admit Handler"
Feb 9 09:48:13.198664 kubelet[3093]: I0209 09:48:13.198432 3093 topology_manager.go:210] "Topology Admit Handler"
Feb 9 09:48:13.212634 kubelet[3093]: I0209 09:48:13.209663 3093 topology_manager.go:210] "Topology Admit Handler"
Feb 9 09:48:13.332294 kubelet[3093]: I0209 09:48:13.332245 3093 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7a41dab-c1c4-470c-a625-78b48d2cd3c8-config-volume\") pod \"coredns-787d4945fb-zgw6s\" (UID: \"d7a41dab-c1c4-470c-a625-78b48d2cd3c8\") " pod="kube-system/coredns-787d4945fb-zgw6s"
Feb 9 09:48:13.332690 kubelet[3093]: I0209 09:48:13.332664 3093 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nh47\" (UniqueName: \"kubernetes.io/projected/ee11eee1-ab84-4220-bd28-7c74f0bfcde8-kube-api-access-5nh47\") pod \"coredns-787d4945fb-gm6gf\" (UID: \"ee11eee1-ab84-4220-bd28-7c74f0bfcde8\") " pod="kube-system/coredns-787d4945fb-gm6gf"
Feb 9 09:48:13.332873 kubelet[3093]: I0209 09:48:13.332851 3093 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee11eee1-ab84-4220-bd28-7c74f0bfcde8-config-volume\") pod \"coredns-787d4945fb-gm6gf\" (UID: \"ee11eee1-ab84-4220-bd28-7c74f0bfcde8\") " pod="kube-system/coredns-787d4945fb-gm6gf"
Feb 9 09:48:13.333104 kubelet[3093]: I0209 09:48:13.333067 3093 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e000c922-03b1-4fd6-9ba0-d228ef27458c-tigera-ca-bundle\") pod \"calico-kube-controllers-6fcdf54d4d-n9f7w\" (UID: \"e000c922-03b1-4fd6-9ba0-d228ef27458c\") " pod="calico-system/calico-kube-controllers-6fcdf54d4d-n9f7w"
Feb 9 09:48:13.333305 kubelet[3093]: I0209 09:48:13.333284 3093 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twcqq\" (UniqueName: \"kubernetes.io/projected/e000c922-03b1-4fd6-9ba0-d228ef27458c-kube-api-access-twcqq\") pod \"calico-kube-controllers-6fcdf54d4d-n9f7w\" (UID: \"e000c922-03b1-4fd6-9ba0-d228ef27458c\") " pod="calico-system/calico-kube-controllers-6fcdf54d4d-n9f7w"
Feb 9 09:48:13.333520 kubelet[3093]: I0209 09:48:13.333498 3093 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfrbq\" (UniqueName: \"kubernetes.io/projected/d7a41dab-c1c4-470c-a625-78b48d2cd3c8-kube-api-access-vfrbq\") pod \"coredns-787d4945fb-zgw6s\" (UID: \"d7a41dab-c1c4-470c-a625-78b48d2cd3c8\") " pod="kube-system/coredns-787d4945fb-zgw6s"
Feb 9 09:48:13.538430 env[1801]: time="2024-02-09T09:48:13.537993331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-gm6gf,Uid:ee11eee1-ab84-4220-bd28-7c74f0bfcde8,Namespace:kube-system,Attempt:0,}"
Feb 9 09:48:13.538430 env[1801]: time="2024-02-09T09:48:13.537991363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-zgw6s,Uid:d7a41dab-c1c4-470c-a625-78b48d2cd3c8,Namespace:kube-system,Attempt:0,}"
Feb 9 09:48:13.553404 env[1801]: time="2024-02-09T09:48:13.553338730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fcdf54d4d-n9f7w,Uid:e000c922-03b1-4fd6-9ba0-d228ef27458c,Namespace:calico-system,Attempt:0,}"
Feb 9 09:48:14.259892 env[1801]: time="2024-02-09T09:48:14.259830863Z" level=info msg="shim disconnected" id=52a6c985a7661d34491843137bce9d43925ffae45fbbc337ed4f655e87e75b9d
Feb 9 09:48:14.260699 env[1801]: time="2024-02-09T09:48:14.260662144Z" level=warning msg="cleaning up after shim disconnected" id=52a6c985a7661d34491843137bce9d43925ffae45fbbc337ed4f655e87e75b9d namespace=k8s.io
Feb 9 09:48:14.260841 env[1801]: time="2024-02-09T09:48:14.260814263Z" level=info msg="cleaning up dead shim"
Feb 9 09:48:14.315219 env[1801]: time="2024-02-09T09:48:14.315110925Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:48:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3886 runtime=io.containerd.runc.v2\n"
Feb 9 09:48:14.448121 env[1801]: time="2024-02-09T09:48:14.447977593Z" level=error msg="Failed to destroy network for sandbox \"95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 9 09:48:14.450341 env[1801]: time="2024-02-09T09:48:14.450269488Z" level=error msg="encountered an error cleaning up failed sandbox \"95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 9 09:48:14.450643 env[1801]: time="2024-02-09T09:48:14.450552531Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fcdf54d4d-n9f7w,Uid:e000c922-03b1-4fd6-9ba0-d228ef27458c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 9 09:48:14.454491 kubelet[3093]: E0209 09:48:14.451221 3093 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 9 09:48:14.454491 kubelet[3093]: E0209 09:48:14.451335 3093 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6fcdf54d4d-n9f7w"
Feb 9 09:48:14.454491 kubelet[3093]: E0209 09:48:14.451400 3093 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6fcdf54d4d-n9f7w"
Feb 9 09:48:14.456001 kubelet[3093]: E0209 09:48:14.451503 3093 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6fcdf54d4d-n9f7w_calico-system(e000c922-03b1-4fd6-9ba0-d228ef27458c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6fcdf54d4d-n9f7w_calico-system(e000c922-03b1-4fd6-9ba0-d228ef27458c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6fcdf54d4d-n9f7w" podUID=e000c922-03b1-4fd6-9ba0-d228ef27458c
Feb 9 09:48:14.456914 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3-shm.mount: Deactivated successfully.
Feb 9 09:48:14.460085 env[1801]: time="2024-02-09T09:48:14.460009660Z" level=error msg="Failed to destroy network for sandbox \"330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 9 09:48:14.461021 env[1801]: time="2024-02-09T09:48:14.460946074Z" level=error msg="encountered an error cleaning up failed sandbox \"330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 9 09:48:14.461264 env[1801]: time="2024-02-09T09:48:14.461197565Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-gm6gf,Uid:ee11eee1-ab84-4220-bd28-7c74f0bfcde8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 9 09:48:14.465019 kubelet[3093]: E0209 09:48:14.461756 3093 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 9 09:48:14.465019 kubelet[3093]: E0209 09:48:14.461874 3093 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-gm6gf"
Feb 9 09:48:14.465019 kubelet[3093]: E0209 09:48:14.461916 3093 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-gm6gf"
Feb 9 09:48:14.466128 kubelet[3093]: E0209 09:48:14.464849 3093 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-gm6gf_kube-system(ee11eee1-ab84-4220-bd28-7c74f0bfcde8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-gm6gf_kube-system(ee11eee1-ab84-4220-bd28-7c74f0bfcde8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-gm6gf" podUID=ee11eee1-ab84-4220-bd28-7c74f0bfcde8
Feb 9 09:48:14.477247 env[1801]: time="2024-02-09T09:48:14.477162152Z" level=error msg="Failed to destroy network for sandbox \"7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 9 09:48:14.478090 env[1801]: time="2024-02-09T09:48:14.478032906Z" level=error msg="encountered an error cleaning up failed sandbox \"7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 9 09:48:14.478281 env[1801]: time="2024-02-09T09:48:14.478233919Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-zgw6s,Uid:d7a41dab-c1c4-470c-a625-78b48d2cd3c8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 9 09:48:14.480970 kubelet[3093]: E0209 09:48:14.478701 3093 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 9 09:48:14.480970 kubelet[3093]: E0209 09:48:14.478798 3093 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-zgw6s"
Feb 9 09:48:14.480970 kubelet[3093]: E0209 09:48:14.478861 3093 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-zgw6s"
Feb 9 09:48:14.481795 kubelet[3093]: E0209 09:48:14.481729 3093 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-zgw6s_kube-system(d7a41dab-c1c4-470c-a625-78b48d2cd3c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-zgw6s_kube-system(d7a41dab-c1c4-470c-a625-78b48d2cd3c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-zgw6s" podUID=d7a41dab-c1c4-470c-a625-78b48d2cd3c8
Feb 9 09:48:14.957358 env[1801]: time="2024-02-09T09:48:14.956881538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ptrbz,Uid:b6105262-fb93-4a15-bf14-4f48140174ba,Namespace:calico-system,Attempt:0,}"
Feb 9 09:48:15.052356 env[1801]: time="2024-02-09T09:48:15.052275891Z" level=error msg="Failed to destroy network for sandbox \"be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 9 09:48:15.053387 env[1801]: time="2024-02-09T09:48:15.053330794Z" level=error msg="encountered an error cleaning up failed sandbox \"be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 9 09:48:15.053623 env[1801]: time="2024-02-09T09:48:15.053551285Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ptrbz,Uid:b6105262-fb93-4a15-bf14-4f48140174ba,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 9 09:48:15.054175 kubelet[3093]: E0209 09:48:15.054114 3093 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 9 09:48:15.054303 kubelet[3093]: E0209 09:48:15.054219 3093 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ptrbz"
Feb 9 09:48:15.054303 kubelet[3093]: E0209 09:48:15.054283 3093 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ptrbz"
Feb 9 09:48:15.055990 kubelet[3093]: E0209 09:48:15.055775 3093 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ptrbz_calico-system(b6105262-fb93-4a15-bf14-4f48140174ba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ptrbz_calico-system(b6105262-fb93-4a15-bf14-4f48140174ba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ptrbz" podUID=b6105262-fb93-4a15-bf14-4f48140174ba
Feb 9 09:48:15.108804 kubelet[3093]: I0209 09:48:15.107945 3093 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d"
Feb 9 09:48:15.111403 env[1801]: time="2024-02-09T09:48:15.111322802Z" level=info msg="StopPodSandbox for \"7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d\""
Feb 9 09:48:15.133989 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d-shm.mount: Deactivated successfully.
Feb 9 09:48:15.134278 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68-shm.mount: Deactivated successfully.
Feb 9 09:48:15.144277 env[1801]: time="2024-02-09T09:48:15.143052267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\""
Feb 9 09:48:15.156622 kubelet[3093]: I0209 09:48:15.146031 3093 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5"
Feb 9 09:48:15.156791 env[1801]: time="2024-02-09T09:48:15.151215385Z" level=info msg="StopPodSandbox for \"be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5\""
Feb 9 09:48:15.162776 kubelet[3093]: I0209 09:48:15.162725 3093 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3"
Feb 9 09:48:15.165160 env[1801]: time="2024-02-09T09:48:15.165105412Z" level=info msg="StopPodSandbox for \"95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3\""
Feb 9 09:48:15.168601 kubelet[3093]: I0209 09:48:15.168521 3093 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68"
Feb 9 09:48:15.169733 env[1801]: time="2024-02-09T09:48:15.169677266Z" level=info msg="StopPodSandbox for \"330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68\""
Feb 9 09:48:15.244100 env[1801]: time="2024-02-09T09:48:15.243092798Z" level=error msg="StopPodSandbox for \"7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d\" failed" error="failed to destroy network for sandbox \"7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 9 09:48:15.245001 kubelet[3093]: E0209 09:48:15.244756 3093 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d"
Feb 9 09:48:15.245001 kubelet[3093]: E0209 09:48:15.244844 3093 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d}
Feb 9 09:48:15.245001 kubelet[3093]: E0209 09:48:15.244905 3093 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d7a41dab-c1c4-470c-a625-78b48d2cd3c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Feb 9 09:48:15.245001 kubelet[3093]: E0209 09:48:15.244963 3093 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d7a41dab-c1c4-470c-a625-78b48d2cd3c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-zgw6s" podUID=d7a41dab-c1c4-470c-a625-78b48d2cd3c8
Feb 9 09:48:15.279588 env[1801]: time="2024-02-09T09:48:15.279500029Z" level=error msg="StopPodSandbox for \"95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3\" failed" error="failed to destroy network for sandbox \"95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 9 09:48:15.280758 kubelet[3093]: E0209 09:48:15.280712 3093 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3"
Feb 9 09:48:15.280919 kubelet[3093]: E0209 09:48:15.280799 3093 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3}
Feb 9 09:48:15.280919 kubelet[3093]: E0209 09:48:15.280887 3093 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e000c922-03b1-4fd6-9ba0-d228ef27458c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Feb 9 09:48:15.281108 kubelet[3093]: E0209 09:48:15.280990 3093 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e000c922-03b1-4fd6-9ba0-d228ef27458c\" with KillPodSandboxError: \"rpc error: code =
Unknown desc = failed to destroy network for sandbox \\\"95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6fcdf54d4d-n9f7w" podUID=e000c922-03b1-4fd6-9ba0-d228ef27458c Feb 9 09:48:15.281920 env[1801]: time="2024-02-09T09:48:15.281838511Z" level=error msg="StopPodSandbox for \"330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68\" failed" error="failed to destroy network for sandbox \"330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:48:15.282318 kubelet[3093]: E0209 09:48:15.282276 3093 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68" Feb 9 09:48:15.282432 kubelet[3093]: E0209 09:48:15.282361 3093 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68} Feb 9 09:48:15.282503 kubelet[3093]: E0209 09:48:15.282444 3093 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ee11eee1-ab84-4220-bd28-7c74f0bfcde8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 09:48:15.282695 kubelet[3093]: E0209 09:48:15.282498 3093 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ee11eee1-ab84-4220-bd28-7c74f0bfcde8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-gm6gf" podUID=ee11eee1-ab84-4220-bd28-7c74f0bfcde8 Feb 9 09:48:15.285767 env[1801]: time="2024-02-09T09:48:15.285694527Z" level=error msg="StopPodSandbox for \"be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5\" failed" error="failed to destroy network for sandbox \"be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:48:15.286303 kubelet[3093]: E0209 09:48:15.286243 3093 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5" Feb 9 09:48:15.286490 kubelet[3093]: E0209 09:48:15.286340 3093 
kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5} Feb 9 09:48:15.286614 kubelet[3093]: E0209 09:48:15.286555 3093 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b6105262-fb93-4a15-bf14-4f48140174ba\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 09:48:15.286752 kubelet[3093]: E0209 09:48:15.286647 3093 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b6105262-fb93-4a15-bf14-4f48140174ba\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ptrbz" podUID=b6105262-fb93-4a15-bf14-4f48140174ba Feb 9 09:48:22.626688 amazon-ssm-agent[1853]: 2024-02-09 09:48:22 INFO [HealthCheck] HealthCheck reporting agent health. Feb 9 09:48:23.924632 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3551990299.mount: Deactivated successfully. 
Feb 9 09:48:24.029919 env[1801]: time="2024-02-09T09:48:24.029863920Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:48:24.034024 env[1801]: time="2024-02-09T09:48:24.033959055Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c445639cb28807ced09724016dc3b273b170b14d3b3d0c39b1affa1cc6b68774,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:48:24.036708 env[1801]: time="2024-02-09T09:48:24.036661235Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:48:24.039278 env[1801]: time="2024-02-09T09:48:24.039219614Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:a45dffb21a0e9ca8962f36359a2ab776beeecd93843543c2fa1745d7bbb0f754,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:48:24.040586 env[1801]: time="2024-02-09T09:48:24.040526761Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\" returns image reference \"sha256:c445639cb28807ced09724016dc3b273b170b14d3b3d0c39b1affa1cc6b68774\""
Feb 9 09:48:24.071387 env[1801]: time="2024-02-09T09:48:24.071317003Z" level=info msg="CreateContainer within sandbox \"bd66bf1052ba4ca5bff39135b1dedf90781a267c77743cbe99ac0021a41e1196\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Feb 9 09:48:24.096049 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3721228062.mount: Deactivated successfully.
Feb 9 09:48:24.106824 env[1801]: time="2024-02-09T09:48:24.106724977Z" level=info msg="CreateContainer within sandbox \"bd66bf1052ba4ca5bff39135b1dedf90781a267c77743cbe99ac0021a41e1196\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f32f686d7761da23a9ab57c3c42e55447a31ba7a09ecf86226656ab56f20a996\""
Feb 9 09:48:24.110608 env[1801]: time="2024-02-09T09:48:24.108152689Z" level=info msg="StartContainer for \"f32f686d7761da23a9ab57c3c42e55447a31ba7a09ecf86226656ab56f20a996\""
Feb 9 09:48:24.270017 env[1801]: time="2024-02-09T09:48:24.269955324Z" level=info msg="StartContainer for \"f32f686d7761da23a9ab57c3c42e55447a31ba7a09ecf86226656ab56f20a996\" returns successfully"
Feb 9 09:48:24.427343 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Feb 9 09:48:24.427536 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld. All Rights Reserved.
Feb 9 09:48:25.247629 kubelet[3093]: I0209 09:48:25.247373 3093 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-dz9b4" podStartSLOduration=-9.223372011607481e+09 pod.CreationTimestamp="2024-02-09 09:48:00 +0000 UTC" firstStartedPulling="2024-02-09 09:48:01.289414953 +0000 UTC m=+22.938550878" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:48:25.246701149 +0000 UTC m=+46.895837110" watchObservedRunningTime="2024-02-09 09:48:25.247294169 +0000 UTC m=+46.896430118"
Feb 9 09:48:25.274289 systemd[1]: run-containerd-runc-k8s.io-f32f686d7761da23a9ab57c3c42e55447a31ba7a09ecf86226656ab56f20a996-runc.i8vq4r.mount: Deactivated successfully.
Feb 9 09:48:26.155000 audit[4223]: AVC avc: denied { write } for pid=4223 comm="tee" name="fd" dev="proc" ino=21889 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Feb 9 09:48:26.155000 audit[4223]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc7f68985 a2=241 a3=1b6 items=1 ppid=4194 pid=4223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:48:26.180860 kernel: audit: type=1400 audit(1707472106.155:282): avc: denied { write } for pid=4223 comm="tee" name="fd" dev="proc" ino=21889 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Feb 9 09:48:26.181012 kernel: audit: type=1300 audit(1707472106.155:282): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc7f68985 a2=241 a3=1b6 items=1 ppid=4194 pid=4223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:48:26.181627 kernel: audit: type=1307 audit(1707472106.155:282): cwd="/etc/service/enabled/cni/log"
Feb 9 09:48:26.155000 audit: CWD cwd="/etc/service/enabled/cni/log"
Feb 9 09:48:26.155000 audit: PATH item=0 name="/dev/fd/63" inode=21045 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 09:46:26.192927 kernel: audit: type=1302 audit(1707472106.155:282): item=0 name="/dev/fd/63" inode=21045 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 09:48:26.155000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Feb 9 09:48:26.200203 kernel: audit: type=1327 audit(1707472106.155:282): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Feb 9 09:48:26.200353 kernel: audit: type=1400 audit(1707472106.167:283): avc: denied { write } for pid=4227 comm="tee" name="fd" dev="proc" ino=21899 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Feb 9 09:48:26.167000 audit[4227]: AVC avc: denied { write } for pid=4227 comm="tee" name="fd" dev="proc" ino=21899 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Feb 9 09:48:26.167000 audit[4227]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffd3569973 a2=241 a3=1b6 items=1 ppid=4211 pid=4227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:48:26.219843 kernel: audit: type=1300 audit(1707472106.167:283): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffd3569973 a2=241 a3=1b6 items=1 ppid=4211 pid=4227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:48:26.167000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log"
Feb 9 09:48:26.236857 kernel: audit: type=1307 audit(1707472106.167:283): cwd="/etc/service/enabled/allocate-tunnel-addrs/log"
Feb 9 09:48:26.236994 kernel: audit: type=1302 audit(1707472106.167:283): item=0 name="/dev/fd/63" inode=21895 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 09:48:26.167000 audit: PATH item=0 name="/dev/fd/63" inode=21895 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 09:48:26.167000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Feb 9 09:48:26.258478 kernel: audit: type=1327 audit(1707472106.167:283): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Feb 9 09:48:26.175000 audit[4214]: AVC avc: denied { write } for pid=4214 comm="tee" name="fd" dev="proc" ino=21903 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Feb 9 09:48:26.175000 audit[4214]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffdafe9984 a2=241 a3=1b6 items=1 ppid=4186 pid=4214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:48:26.175000 audit: CWD cwd="/etc/service/enabled/bird/log"
Feb 9 09:48:26.175000 audit: PATH item=0 name="/dev/fd/63" inode=21878 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 09:48:26.175000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Feb 9 09:48:26.198000 audit[4229]: AVC avc: denied { write } for pid=4229 comm="tee" name="fd" dev="proc" ino=21920 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Feb 9 09:48:26.198000 audit[4229]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffcfc41983 a2=241 a3=1b6 items=1 ppid=4189 pid=4229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:48:26.198000 audit: CWD cwd="/etc/service/enabled/bird6/log"
Feb 9 09:48:26.198000 audit: PATH item=0 name="/dev/fd/63" inode=21896 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 09:48:26.198000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Feb 9 09:48:26.244000 audit[4245]: AVC avc: denied { write } for pid=4245 comm="tee" name="fd" dev="proc" ino=21933 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Feb 9 09:48:26.244000 audit[4245]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc33e0983 a2=241 a3=1b6 items=1 ppid=4193 pid=4245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:48:26.244000 audit: CWD cwd="/etc/service/enabled/confd/log"
Feb 9 09:48:26.244000 audit: PATH item=0 name="/dev/fd/63" inode=21924 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 09:48:26.244000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Feb 9 09:48:26.277000 audit[4247]: AVC avc: denied { write } for pid=4247 comm="tee" name="fd" dev="proc" ino=21060 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Feb 9 09:48:26.277000 audit[4247]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe1c4e974 a2=241 a3=1b6 items=1 ppid=4190 pid=4247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:48:26.277000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log"
Feb 9 09:48:26.277000 audit: PATH item=0 name="/dev/fd/63" inode=21925 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 09:48:26.277000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Feb 9 09:48:26.278000 audit[4254]: AVC avc: denied { write } for pid=4254 comm="tee" name="fd" dev="proc" ino=21064 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Feb 9 09:48:26.278000 audit[4254]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc3e85983 a2=241 a3=1b6 items=1 ppid=4199 pid=4254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:48:26.278000 audit: CWD cwd="/etc/service/enabled/felix/log"
Feb 9 09:48:26.278000 audit: PATH item=0 name="/dev/fd/63" inode=21926 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 09:48:26.278000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Feb 9 09:48:26.303186 systemd[1]: run-containerd-runc-k8s.io-f32f686d7761da23a9ab57c3c42e55447a31ba7a09ecf86226656ab56f20a996-runc.3yKTIq.mount: Deactivated successfully.
Feb 9 09:48:26.949608 env[1801]: time="2024-02-09T09:48:26.948458993Z" level=info msg="StopPodSandbox for \"95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3\""
Feb 9 09:48:27.125000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 09:48:27.125000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 09:48:27.125000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 09:48:27.125000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 09:48:27.125000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 09:48:27.125000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 09:48:27.125000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 09:48:27.125000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 09:48:27.125000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 09:48:27.125000 audit: BPF prog-id=10 op=LOAD
Feb 9 09:48:27.125000 audit[4368]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe72d4098 a2=70 a3=0 items=0 ppid=4200 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:48:27.125000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
Feb 9 09:48:27.125000 audit: BPF prog-id=10 op=UNLOAD
Feb 9 09:48:27.125000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 09:48:27.125000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 09:48:27.125000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 09:48:27.125000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 09:48:27.125000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 09:48:27.125000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 09:48:27.125000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 09:48:27.125000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 09:48:27.125000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 09:48:27.125000 audit: BPF prog-id=11 op=LOAD
Feb 9 09:48:27.125000 audit[4368]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe72d4098 a2=70 a3=4a174c items=0 ppid=4200 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:48:27.125000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
Feb 9 09:48:27.125000 audit: BPF prog-id=11 op=UNLOAD
Feb 9 09:48:27.125000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 09:48:27.125000 audit[4368]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=0 a1=ffffe72d40c8 a2=70 a3=266b73f items=0 ppid=4200 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:48:27.125000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
Feb 9 09:48:27.126000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 09:48:27.126000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 09:48:27.126000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 09:48:27.126000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 09:48:27.126000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 09:48:27.126000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 09:48:27.126000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 09:48:27.126000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 09:48:27.126000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 09:48:27.126000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 09:48:27.126000 audit: BPF prog-id=12 op=LOAD
Feb 9 09:48:27.126000 audit[4368]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe72d4018 a2=70 a3=266b759 items=0 ppid=4200 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:48:27.126000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
Feb 9 09:48:27.133056 (udev-worker)[4132]: Network interface NamePolicy= disabled on kernel command line.
Feb 9 09:48:27.139098 (udev-worker)[4367]: Network interface NamePolicy= disabled on kernel command line.
Feb 9 09:48:27.142000 audit[4374]: AVC avc: denied { bpf } for pid=4374 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 09:48:27.142000 audit[4374]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=fffffc99e338 a2=70 a3=0 items=0 ppid=4200 pid=4374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:48:27.142000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Feb 9 09:48:27.142000 audit[4374]: AVC avc: denied { bpf } for pid=4374 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 09:48:27.142000 audit[4374]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=fffffc99e218 a2=70 a3=2 items=0 ppid=4200 pid=4374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:48:27.142000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Feb 9 09:48:27.159000 audit: BPF prog-id=12 op=UNLOAD
Feb 9 09:48:27.280010 env[1801]: 2024-02-09 09:48:27.112 [INFO][4348] k8s.go 578: Cleaning up netns ContainerID="95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3"
Feb 9 09:48:27.280010 env[1801]: 2024-02-09 09:48:27.112 [INFO][4348] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3" iface="eth0" netns="/var/run/netns/cni-2c432488-8d35-24ea-ee6b-1b8856ea0442"
Feb 9 09:48:27.280010 env[1801]: 2024-02-09 09:48:27.121 [INFO][4348] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3" iface="eth0" netns="/var/run/netns/cni-2c432488-8d35-24ea-ee6b-1b8856ea0442"
Feb 9 09:48:27.280010 env[1801]: 2024-02-09 09:48:27.139 [INFO][4348] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3" iface="eth0" netns="/var/run/netns/cni-2c432488-8d35-24ea-ee6b-1b8856ea0442"
Feb 9 09:48:27.280010 env[1801]: 2024-02-09 09:48:27.139 [INFO][4348] k8s.go 585: Releasing IP address(es) ContainerID="95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3"
Feb 9 09:48:27.280010 env[1801]: 2024-02-09 09:48:27.139 [INFO][4348] utils.go 188: Calico CNI releasing IP address ContainerID="95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3"
Feb 9 09:48:27.280010 env[1801]: 2024-02-09 09:48:27.250 [INFO][4372] ipam_plugin.go 415: Releasing address using handleID ContainerID="95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3" HandleID="k8s-pod-network.95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3" Workload="ip--172--31--30--62-k8s-calico--kube--controllers--6fcdf54d4d--n9f7w-eth0"
Feb 9 09:48:27.280010 env[1801]: 2024-02-09 09:48:27.250 [INFO][4372] ipam_plugin.go 356: About to acquire host-wide IPAM lock.
Feb 9 09:48:27.280010 env[1801]: 2024-02-09 09:48:27.251 [INFO][4372] ipam_plugin.go 371: Acquired host-wide IPAM lock.
Feb 9 09:48:27.280010 env[1801]: 2024-02-09 09:48:27.269 [WARNING][4372] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3" HandleID="k8s-pod-network.95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3" Workload="ip--172--31--30--62-k8s-calico--kube--controllers--6fcdf54d4d--n9f7w-eth0"
Feb 9 09:48:27.280010 env[1801]: 2024-02-09 09:48:27.269 [INFO][4372] ipam_plugin.go 443: Releasing address using workloadID ContainerID="95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3" HandleID="k8s-pod-network.95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3" Workload="ip--172--31--30--62-k8s-calico--kube--controllers--6fcdf54d4d--n9f7w-eth0"
Feb 9 09:48:27.280010 env[1801]: 2024-02-09 09:48:27.273 [INFO][4372] ipam_plugin.go 377: Released host-wide IPAM lock.
Feb 9 09:48:27.280010 env[1801]: 2024-02-09 09:48:27.275 [INFO][4348] k8s.go 591: Teardown processing complete. ContainerID="95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3"
Feb 9 09:48:27.282917 env[1801]: time="2024-02-09T09:48:27.282853928Z" level=info msg="TearDown network for sandbox \"95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3\" successfully"
Feb 9 09:48:27.283126 env[1801]: time="2024-02-09T09:48:27.283091303Z" level=info msg="StopPodSandbox for \"95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3\" returns successfully"
Feb 9 09:48:27.292422 systemd[1]: run-netns-cni\x2d2c432488\x2d8d35\x2d24ea\x2dee6b\x2d1b8856ea0442.mount: Deactivated successfully.
Feb 9 09:48:27.294409 env[1801]: time="2024-02-09T09:48:27.294347140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fcdf54d4d-n9f7w,Uid:e000c922-03b1-4fd6-9ba0-d228ef27458c,Namespace:calico-system,Attempt:1,}" Feb 9 09:48:27.317000 audit[4402]: NETFILTER_CFG table=raw:111 family=2 entries=19 op=nft_register_chain pid=4402 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 09:48:27.317000 audit[4402]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6132 a0=3 a1=ffffcc4cf1d0 a2=0 a3=ffffbd3e0fa8 items=0 ppid=4200 pid=4402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:27.317000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 09:48:27.334000 audit[4404]: NETFILTER_CFG table=mangle:112 family=2 entries=19 op=nft_register_chain pid=4404 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 09:48:27.334000 audit[4404]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6800 a0=3 a1=ffffe6483ea0 a2=0 a3=ffff9da2bfa8 items=0 ppid=4200 pid=4404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:27.334000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 09:48:27.342000 audit[4403]: NETFILTER_CFG table=nat:113 family=2 entries=16 op=nft_register_chain pid=4403 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 09:48:27.342000 audit[4403]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5188 a0=3 a1=ffffc2e9fb90 a2=0 
a3=ffffb7d40fa8 items=0 ppid=4200 pid=4403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:27.342000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 09:48:27.367000 audit[4414]: NETFILTER_CFG table=filter:114 family=2 entries=39 op=nft_register_chain pid=4414 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 09:48:27.367000 audit[4414]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=18472 a0=3 a1=ffffcdb26440 a2=0 a3=ffff8234afa8 items=0 ppid=4200 pid=4414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:27.367000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 09:48:27.590765 systemd-networkd[1595]: cali7f54e13fb3e: Link UP Feb 9 09:48:27.593662 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali7f54e13fb3e: link becomes ready Feb 9 09:48:27.591987 systemd-networkd[1595]: cali7f54e13fb3e: Gained carrier Feb 9 09:48:27.627962 env[1801]: 2024-02-09 09:48:27.439 [INFO][4409] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--62-k8s-calico--kube--controllers--6fcdf54d4d--n9f7w-eth0 calico-kube-controllers-6fcdf54d4d- calico-system e000c922-03b1-4fd6-9ba0-d228ef27458c 678 0 2024-02-09 09:48:00 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6fcdf54d4d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-30-62 calico-kube-controllers-6fcdf54d4d-n9f7w eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7f54e13fb3e [] []}} ContainerID="dc9006bd1c4cae56b9c3c5727d16882a21665f3ce04a0e38c5ef36632c2538b0" Namespace="calico-system" Pod="calico-kube-controllers-6fcdf54d4d-n9f7w" WorkloadEndpoint="ip--172--31--30--62-k8s-calico--kube--controllers--6fcdf54d4d--n9f7w-" Feb 9 09:48:27.627962 env[1801]: 2024-02-09 09:48:27.440 [INFO][4409] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="dc9006bd1c4cae56b9c3c5727d16882a21665f3ce04a0e38c5ef36632c2538b0" Namespace="calico-system" Pod="calico-kube-controllers-6fcdf54d4d-n9f7w" WorkloadEndpoint="ip--172--31--30--62-k8s-calico--kube--controllers--6fcdf54d4d--n9f7w-eth0" Feb 9 09:48:27.627962 env[1801]: 2024-02-09 09:48:27.503 [INFO][4423] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dc9006bd1c4cae56b9c3c5727d16882a21665f3ce04a0e38c5ef36632c2538b0" HandleID="k8s-pod-network.dc9006bd1c4cae56b9c3c5727d16882a21665f3ce04a0e38c5ef36632c2538b0" Workload="ip--172--31--30--62-k8s-calico--kube--controllers--6fcdf54d4d--n9f7w-eth0" Feb 9 09:48:27.627962 env[1801]: 2024-02-09 09:48:27.522 [INFO][4423] ipam_plugin.go 268: Auto assigning IP ContainerID="dc9006bd1c4cae56b9c3c5727d16882a21665f3ce04a0e38c5ef36632c2538b0" HandleID="k8s-pod-network.dc9006bd1c4cae56b9c3c5727d16882a21665f3ce04a0e38c5ef36632c2538b0" Workload="ip--172--31--30--62-k8s-calico--kube--controllers--6fcdf54d4d--n9f7w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000507380), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-62", "pod":"calico-kube-controllers-6fcdf54d4d-n9f7w", "timestamp":"2024-02-09 09:48:27.503506042 +0000 UTC"}, Hostname:"ip-172-31-30-62", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 09:48:27.627962 env[1801]: 2024-02-09 09:48:27.523 [INFO][4423] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 09:48:27.627962 env[1801]: 2024-02-09 09:48:27.523 [INFO][4423] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 09:48:27.627962 env[1801]: 2024-02-09 09:48:27.523 [INFO][4423] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-62' Feb 9 09:48:27.627962 env[1801]: 2024-02-09 09:48:27.526 [INFO][4423] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.dc9006bd1c4cae56b9c3c5727d16882a21665f3ce04a0e38c5ef36632c2538b0" host="ip-172-31-30-62" Feb 9 09:48:27.627962 env[1801]: 2024-02-09 09:48:27.533 [INFO][4423] ipam.go 372: Looking up existing affinities for host host="ip-172-31-30-62" Feb 9 09:48:27.627962 env[1801]: 2024-02-09 09:48:27.539 [INFO][4423] ipam.go 489: Trying affinity for 192.168.119.192/26 host="ip-172-31-30-62" Feb 9 09:48:27.627962 env[1801]: 2024-02-09 09:48:27.543 [INFO][4423] ipam.go 155: Attempting to load block cidr=192.168.119.192/26 host="ip-172-31-30-62" Feb 9 09:48:27.627962 env[1801]: 2024-02-09 09:48:27.548 [INFO][4423] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.119.192/26 host="ip-172-31-30-62" Feb 9 09:48:27.627962 env[1801]: 2024-02-09 09:48:27.548 [INFO][4423] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.119.192/26 handle="k8s-pod-network.dc9006bd1c4cae56b9c3c5727d16882a21665f3ce04a0e38c5ef36632c2538b0" host="ip-172-31-30-62" Feb 9 09:48:27.627962 env[1801]: 2024-02-09 09:48:27.550 [INFO][4423] ipam.go 1682: Creating new handle: k8s-pod-network.dc9006bd1c4cae56b9c3c5727d16882a21665f3ce04a0e38c5ef36632c2538b0 Feb 9 09:48:27.627962 env[1801]: 2024-02-09 09:48:27.557 [INFO][4423] ipam.go 1203: Writing block in order to claim IPs block=192.168.119.192/26 
handle="k8s-pod-network.dc9006bd1c4cae56b9c3c5727d16882a21665f3ce04a0e38c5ef36632c2538b0" host="ip-172-31-30-62" Feb 9 09:48:27.627962 env[1801]: 2024-02-09 09:48:27.573 [INFO][4423] ipam.go 1216: Successfully claimed IPs: [192.168.119.193/26] block=192.168.119.192/26 handle="k8s-pod-network.dc9006bd1c4cae56b9c3c5727d16882a21665f3ce04a0e38c5ef36632c2538b0" host="ip-172-31-30-62" Feb 9 09:48:27.627962 env[1801]: 2024-02-09 09:48:27.573 [INFO][4423] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.119.193/26] handle="k8s-pod-network.dc9006bd1c4cae56b9c3c5727d16882a21665f3ce04a0e38c5ef36632c2538b0" host="ip-172-31-30-62" Feb 9 09:48:27.627962 env[1801]: 2024-02-09 09:48:27.575 [INFO][4423] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 09:48:27.627962 env[1801]: 2024-02-09 09:48:27.575 [INFO][4423] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.119.193/26] IPv6=[] ContainerID="dc9006bd1c4cae56b9c3c5727d16882a21665f3ce04a0e38c5ef36632c2538b0" HandleID="k8s-pod-network.dc9006bd1c4cae56b9c3c5727d16882a21665f3ce04a0e38c5ef36632c2538b0" Workload="ip--172--31--30--62-k8s-calico--kube--controllers--6fcdf54d4d--n9f7w-eth0" Feb 9 09:48:27.630901 env[1801]: 2024-02-09 09:48:27.581 [INFO][4409] k8s.go 385: Populated endpoint ContainerID="dc9006bd1c4cae56b9c3c5727d16882a21665f3ce04a0e38c5ef36632c2538b0" Namespace="calico-system" Pod="calico-kube-controllers-6fcdf54d4d-n9f7w" WorkloadEndpoint="ip--172--31--30--62-k8s-calico--kube--controllers--6fcdf54d4d--n9f7w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--62-k8s-calico--kube--controllers--6fcdf54d4d--n9f7w-eth0", GenerateName:"calico-kube-controllers-6fcdf54d4d-", Namespace:"calico-system", SelfLink:"", UID:"e000c922-03b1-4fd6-9ba0-d228ef27458c", ResourceVersion:"678", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 48, 0, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6fcdf54d4d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-62", ContainerID:"", Pod:"calico-kube-controllers-6fcdf54d4d-n9f7w", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.119.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7f54e13fb3e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:48:27.630901 env[1801]: 2024-02-09 09:48:27.581 [INFO][4409] k8s.go 386: Calico CNI using IPs: [192.168.119.193/32] ContainerID="dc9006bd1c4cae56b9c3c5727d16882a21665f3ce04a0e38c5ef36632c2538b0" Namespace="calico-system" Pod="calico-kube-controllers-6fcdf54d4d-n9f7w" WorkloadEndpoint="ip--172--31--30--62-k8s-calico--kube--controllers--6fcdf54d4d--n9f7w-eth0" Feb 9 09:48:27.630901 env[1801]: 2024-02-09 09:48:27.581 [INFO][4409] dataplane_linux.go 68: Setting the host side veth name to cali7f54e13fb3e ContainerID="dc9006bd1c4cae56b9c3c5727d16882a21665f3ce04a0e38c5ef36632c2538b0" Namespace="calico-system" Pod="calico-kube-controllers-6fcdf54d4d-n9f7w" WorkloadEndpoint="ip--172--31--30--62-k8s-calico--kube--controllers--6fcdf54d4d--n9f7w-eth0" Feb 9 09:48:27.630901 env[1801]: 2024-02-09 09:48:27.594 [INFO][4409] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="dc9006bd1c4cae56b9c3c5727d16882a21665f3ce04a0e38c5ef36632c2538b0" 
Namespace="calico-system" Pod="calico-kube-controllers-6fcdf54d4d-n9f7w" WorkloadEndpoint="ip--172--31--30--62-k8s-calico--kube--controllers--6fcdf54d4d--n9f7w-eth0" Feb 9 09:48:27.630901 env[1801]: 2024-02-09 09:48:27.595 [INFO][4409] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="dc9006bd1c4cae56b9c3c5727d16882a21665f3ce04a0e38c5ef36632c2538b0" Namespace="calico-system" Pod="calico-kube-controllers-6fcdf54d4d-n9f7w" WorkloadEndpoint="ip--172--31--30--62-k8s-calico--kube--controllers--6fcdf54d4d--n9f7w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--62-k8s-calico--kube--controllers--6fcdf54d4d--n9f7w-eth0", GenerateName:"calico-kube-controllers-6fcdf54d4d-", Namespace:"calico-system", SelfLink:"", UID:"e000c922-03b1-4fd6-9ba0-d228ef27458c", ResourceVersion:"678", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 48, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6fcdf54d4d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-62", ContainerID:"dc9006bd1c4cae56b9c3c5727d16882a21665f3ce04a0e38c5ef36632c2538b0", Pod:"calico-kube-controllers-6fcdf54d4d-n9f7w", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.119.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, 
InterfaceName:"cali7f54e13fb3e", MAC:"c6:07:20:e2:ed:1a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:48:27.630901 env[1801]: 2024-02-09 09:48:27.620 [INFO][4409] k8s.go 491: Wrote updated endpoint to datastore ContainerID="dc9006bd1c4cae56b9c3c5727d16882a21665f3ce04a0e38c5ef36632c2538b0" Namespace="calico-system" Pod="calico-kube-controllers-6fcdf54d4d-n9f7w" WorkloadEndpoint="ip--172--31--30--62-k8s-calico--kube--controllers--6fcdf54d4d--n9f7w-eth0" Feb 9 09:48:27.663256 env[1801]: time="2024-02-09T09:48:27.662886020Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:48:27.663256 env[1801]: time="2024-02-09T09:48:27.662960566Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:48:27.663256 env[1801]: time="2024-02-09T09:48:27.662989410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:48:27.664000 env[1801]: time="2024-02-09T09:48:27.663908432Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dc9006bd1c4cae56b9c3c5727d16882a21665f3ce04a0e38c5ef36632c2538b0 pid=4446 runtime=io.containerd.runc.v2 Feb 9 09:48:27.719000 audit[4470]: NETFILTER_CFG table=filter:115 family=2 entries=36 op=nft_register_chain pid=4470 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 09:48:27.719000 audit[4470]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19908 a0=3 a1=ffffd0fceea0 a2=0 a3=ffffa882ffa8 items=0 ppid=4200 pid=4470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:27.719000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 09:48:27.804344 env[1801]: time="2024-02-09T09:48:27.804284716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fcdf54d4d-n9f7w,Uid:e000c922-03b1-4fd6-9ba0-d228ef27458c,Namespace:calico-system,Attempt:1,} returns sandbox id \"dc9006bd1c4cae56b9c3c5727d16882a21665f3ce04a0e38c5ef36632c2538b0\"" Feb 9 09:48:27.808664 env[1801]: time="2024-02-09T09:48:27.807658315Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.27.0\"" Feb 9 09:48:27.861496 systemd-networkd[1595]: vxlan.calico: Link UP Feb 9 09:48:27.861511 systemd-networkd[1595]: vxlan.calico: Gained carrier Feb 9 09:48:28.951656 env[1801]: time="2024-02-09T09:48:28.949828973Z" level=info msg="StopPodSandbox for \"330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68\"" Feb 9 09:48:28.960159 env[1801]: time="2024-02-09T09:48:28.959272408Z" level=info msg="StopPodSandbox for 
\"be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5\"" Feb 9 09:48:28.955952 systemd-networkd[1595]: vxlan.calico: Gained IPv6LL Feb 9 09:48:29.279811 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1151578839.mount: Deactivated successfully. Feb 9 09:48:29.348358 env[1801]: 2024-02-09 09:48:29.134 [INFO][4520] k8s.go 578: Cleaning up netns ContainerID="be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5" Feb 9 09:48:29.348358 env[1801]: 2024-02-09 09:48:29.135 [INFO][4520] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5" iface="eth0" netns="/var/run/netns/cni-52fd9bb3-ae1e-ead6-a5d8-368a02b2cd68" Feb 9 09:48:29.348358 env[1801]: 2024-02-09 09:48:29.142 [INFO][4520] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5" iface="eth0" netns="/var/run/netns/cni-52fd9bb3-ae1e-ead6-a5d8-368a02b2cd68" Feb 9 09:48:29.348358 env[1801]: 2024-02-09 09:48:29.143 [INFO][4520] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5" iface="eth0" netns="/var/run/netns/cni-52fd9bb3-ae1e-ead6-a5d8-368a02b2cd68" Feb 9 09:48:29.348358 env[1801]: 2024-02-09 09:48:29.143 [INFO][4520] k8s.go 585: Releasing IP address(es) ContainerID="be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5" Feb 9 09:48:29.348358 env[1801]: 2024-02-09 09:48:29.143 [INFO][4520] utils.go 188: Calico CNI releasing IP address ContainerID="be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5" Feb 9 09:48:29.348358 env[1801]: 2024-02-09 09:48:29.318 [INFO][4533] ipam_plugin.go 415: Releasing address using handleID ContainerID="be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5" HandleID="k8s-pod-network.be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5" Workload="ip--172--31--30--62-k8s-csi--node--driver--ptrbz-eth0" Feb 9 09:48:29.348358 env[1801]: 2024-02-09 09:48:29.318 [INFO][4533] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 09:48:29.348358 env[1801]: 2024-02-09 09:48:29.318 [INFO][4533] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 09:48:29.348358 env[1801]: 2024-02-09 09:48:29.333 [WARNING][4533] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5" HandleID="k8s-pod-network.be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5" Workload="ip--172--31--30--62-k8s-csi--node--driver--ptrbz-eth0" Feb 9 09:48:29.348358 env[1801]: 2024-02-09 09:48:29.333 [INFO][4533] ipam_plugin.go 443: Releasing address using workloadID ContainerID="be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5" HandleID="k8s-pod-network.be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5" Workload="ip--172--31--30--62-k8s-csi--node--driver--ptrbz-eth0" Feb 9 09:48:29.348358 env[1801]: 2024-02-09 09:48:29.338 [INFO][4533] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 09:48:29.348358 env[1801]: 2024-02-09 09:48:29.341 [INFO][4520] k8s.go 591: Teardown processing complete. ContainerID="be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5" Feb 9 09:48:29.354155 systemd[1]: run-netns-cni\x2d52fd9bb3\x2dae1e\x2dead6\x2da5d8\x2d368a02b2cd68.mount: Deactivated successfully. Feb 9 09:48:29.359196 env[1801]: time="2024-02-09T09:48:29.359106255Z" level=info msg="TearDown network for sandbox \"be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5\" successfully" Feb 9 09:48:29.359825 env[1801]: time="2024-02-09T09:48:29.359746673Z" level=info msg="StopPodSandbox for \"be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5\" returns successfully" Feb 9 09:48:29.361966 env[1801]: time="2024-02-09T09:48:29.361904404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ptrbz,Uid:b6105262-fb93-4a15-bf14-4f48140174ba,Namespace:calico-system,Attempt:1,}" Feb 9 09:48:29.386550 env[1801]: 2024-02-09 09:48:29.140 [INFO][4521] k8s.go 578: Cleaning up netns ContainerID="330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68" Feb 9 09:48:29.386550 env[1801]: 2024-02-09 09:48:29.140 [INFO][4521] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68" iface="eth0" netns="/var/run/netns/cni-333ee3ad-e8d0-14b9-c751-cf7bdeaaf663" Feb 9 09:48:29.386550 env[1801]: 2024-02-09 09:48:29.142 [INFO][4521] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68" iface="eth0" netns="/var/run/netns/cni-333ee3ad-e8d0-14b9-c751-cf7bdeaaf663" Feb 9 09:48:29.386550 env[1801]: 2024-02-09 09:48:29.143 [INFO][4521] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68" iface="eth0" netns="/var/run/netns/cni-333ee3ad-e8d0-14b9-c751-cf7bdeaaf663" Feb 9 09:48:29.386550 env[1801]: 2024-02-09 09:48:29.143 [INFO][4521] k8s.go 585: Releasing IP address(es) ContainerID="330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68" Feb 9 09:48:29.386550 env[1801]: 2024-02-09 09:48:29.143 [INFO][4521] utils.go 188: Calico CNI releasing IP address ContainerID="330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68" Feb 9 09:48:29.386550 env[1801]: 2024-02-09 09:48:29.328 [INFO][4534] ipam_plugin.go 415: Releasing address using handleID ContainerID="330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68" HandleID="k8s-pod-network.330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68" Workload="ip--172--31--30--62-k8s-coredns--787d4945fb--gm6gf-eth0" Feb 9 09:48:29.386550 env[1801]: 2024-02-09 09:48:29.328 [INFO][4534] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 09:48:29.386550 env[1801]: 2024-02-09 09:48:29.336 [INFO][4534] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 09:48:29.386550 env[1801]: 2024-02-09 09:48:29.370 [WARNING][4534] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68" HandleID="k8s-pod-network.330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68" Workload="ip--172--31--30--62-k8s-coredns--787d4945fb--gm6gf-eth0" Feb 9 09:48:29.386550 env[1801]: 2024-02-09 09:48:29.370 [INFO][4534] ipam_plugin.go 443: Releasing address using workloadID ContainerID="330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68" HandleID="k8s-pod-network.330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68" Workload="ip--172--31--30--62-k8s-coredns--787d4945fb--gm6gf-eth0" Feb 9 09:48:29.386550 env[1801]: 2024-02-09 09:48:29.375 [INFO][4534] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 09:48:29.386550 env[1801]: 2024-02-09 09:48:29.380 [INFO][4521] k8s.go 591: Teardown processing complete. ContainerID="330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68" Feb 9 09:48:29.402024 env[1801]: time="2024-02-09T09:48:29.391952696Z" level=info msg="TearDown network for sandbox \"330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68\" successfully" Feb 9 09:48:29.402024 env[1801]: time="2024-02-09T09:48:29.392143748Z" level=info msg="StopPodSandbox for \"330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68\" returns successfully" Feb 9 09:48:29.397453 systemd[1]: run-netns-cni\x2d333ee3ad\x2de8d0\x2d14b9\x2dc751\x2dcf7bdeaaf663.mount: Deactivated successfully. 
Feb 9 09:48:29.402354 env[1801]: time="2024-02-09T09:48:29.402138584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-gm6gf,Uid:ee11eee1-ab84-4220-bd28-7c74f0bfcde8,Namespace:kube-system,Attempt:1,}" Feb 9 09:48:29.468196 systemd-networkd[1595]: cali7f54e13fb3e: Gained IPv6LL Feb 9 09:48:29.819392 systemd-networkd[1595]: cali0669725dadc: Link UP Feb 9 09:48:29.824731 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 09:48:29.825041 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali0669725dadc: link becomes ready Feb 9 09:48:29.826128 systemd-networkd[1595]: cali0669725dadc: Gained carrier Feb 9 09:48:29.882747 env[1801]: 2024-02-09 09:48:29.523 [INFO][4546] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--62-k8s-csi--node--driver--ptrbz-eth0 csi-node-driver- calico-system b6105262-fb93-4a15-bf14-4f48140174ba 688 0 2024-02-09 09:48:00 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7c77f88967 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ip-172-31-30-62 csi-node-driver-ptrbz eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali0669725dadc [] []}} ContainerID="16f00e58914b80b80f1864ba2be6dbb79a47bdbe49e21e760a3da84eb2726625" Namespace="calico-system" Pod="csi-node-driver-ptrbz" WorkloadEndpoint="ip--172--31--30--62-k8s-csi--node--driver--ptrbz-" Feb 9 09:48:29.882747 env[1801]: 2024-02-09 09:48:29.523 [INFO][4546] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="16f00e58914b80b80f1864ba2be6dbb79a47bdbe49e21e760a3da84eb2726625" Namespace="calico-system" Pod="csi-node-driver-ptrbz" WorkloadEndpoint="ip--172--31--30--62-k8s-csi--node--driver--ptrbz-eth0" Feb 9 09:48:29.882747 env[1801]: 2024-02-09 09:48:29.702 [INFO][4569] ipam_plugin.go 228: 
Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="16f00e58914b80b80f1864ba2be6dbb79a47bdbe49e21e760a3da84eb2726625" HandleID="k8s-pod-network.16f00e58914b80b80f1864ba2be6dbb79a47bdbe49e21e760a3da84eb2726625" Workload="ip--172--31--30--62-k8s-csi--node--driver--ptrbz-eth0" Feb 9 09:48:29.882747 env[1801]: 2024-02-09 09:48:29.735 [INFO][4569] ipam_plugin.go 268: Auto assigning IP ContainerID="16f00e58914b80b80f1864ba2be6dbb79a47bdbe49e21e760a3da84eb2726625" HandleID="k8s-pod-network.16f00e58914b80b80f1864ba2be6dbb79a47bdbe49e21e760a3da84eb2726625" Workload="ip--172--31--30--62-k8s-csi--node--driver--ptrbz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000294460), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-62", "pod":"csi-node-driver-ptrbz", "timestamp":"2024-02-09 09:48:29.702167442 +0000 UTC"}, Hostname:"ip-172-31-30-62", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 09:48:29.882747 env[1801]: 2024-02-09 09:48:29.735 [INFO][4569] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 09:48:29.882747 env[1801]: 2024-02-09 09:48:29.735 [INFO][4569] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 9 09:48:29.882747 env[1801]: 2024-02-09 09:48:29.735 [INFO][4569] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-62' Feb 9 09:48:29.882747 env[1801]: 2024-02-09 09:48:29.740 [INFO][4569] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.16f00e58914b80b80f1864ba2be6dbb79a47bdbe49e21e760a3da84eb2726625" host="ip-172-31-30-62" Feb 9 09:48:29.882747 env[1801]: 2024-02-09 09:48:29.749 [INFO][4569] ipam.go 372: Looking up existing affinities for host host="ip-172-31-30-62" Feb 9 09:48:29.882747 env[1801]: 2024-02-09 09:48:29.762 [INFO][4569] ipam.go 489: Trying affinity for 192.168.119.192/26 host="ip-172-31-30-62" Feb 9 09:48:29.882747 env[1801]: 2024-02-09 09:48:29.765 [INFO][4569] ipam.go 155: Attempting to load block cidr=192.168.119.192/26 host="ip-172-31-30-62" Feb 9 09:48:29.882747 env[1801]: 2024-02-09 09:48:29.769 [INFO][4569] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.119.192/26 host="ip-172-31-30-62" Feb 9 09:48:29.882747 env[1801]: 2024-02-09 09:48:29.769 [INFO][4569] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.119.192/26 handle="k8s-pod-network.16f00e58914b80b80f1864ba2be6dbb79a47bdbe49e21e760a3da84eb2726625" host="ip-172-31-30-62" Feb 9 09:48:29.882747 env[1801]: 2024-02-09 09:48:29.772 [INFO][4569] ipam.go 1682: Creating new handle: k8s-pod-network.16f00e58914b80b80f1864ba2be6dbb79a47bdbe49e21e760a3da84eb2726625 Feb 9 09:48:29.882747 env[1801]: 2024-02-09 09:48:29.780 [INFO][4569] ipam.go 1203: Writing block in order to claim IPs block=192.168.119.192/26 handle="k8s-pod-network.16f00e58914b80b80f1864ba2be6dbb79a47bdbe49e21e760a3da84eb2726625" host="ip-172-31-30-62" Feb 9 09:48:29.882747 env[1801]: 2024-02-09 09:48:29.794 [INFO][4569] ipam.go 1216: Successfully claimed IPs: [192.168.119.194/26] block=192.168.119.192/26 handle="k8s-pod-network.16f00e58914b80b80f1864ba2be6dbb79a47bdbe49e21e760a3da84eb2726625" host="ip-172-31-30-62" Feb 9 
09:48:29.882747 env[1801]: 2024-02-09 09:48:29.794 [INFO][4569] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.119.194/26] handle="k8s-pod-network.16f00e58914b80b80f1864ba2be6dbb79a47bdbe49e21e760a3da84eb2726625" host="ip-172-31-30-62" Feb 9 09:48:29.882747 env[1801]: 2024-02-09 09:48:29.794 [INFO][4569] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 09:48:29.882747 env[1801]: 2024-02-09 09:48:29.794 [INFO][4569] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.119.194/26] IPv6=[] ContainerID="16f00e58914b80b80f1864ba2be6dbb79a47bdbe49e21e760a3da84eb2726625" HandleID="k8s-pod-network.16f00e58914b80b80f1864ba2be6dbb79a47bdbe49e21e760a3da84eb2726625" Workload="ip--172--31--30--62-k8s-csi--node--driver--ptrbz-eth0" Feb 9 09:48:29.885021 env[1801]: 2024-02-09 09:48:29.801 [INFO][4546] k8s.go 385: Populated endpoint ContainerID="16f00e58914b80b80f1864ba2be6dbb79a47bdbe49e21e760a3da84eb2726625" Namespace="calico-system" Pod="csi-node-driver-ptrbz" WorkloadEndpoint="ip--172--31--30--62-k8s-csi--node--driver--ptrbz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--62-k8s-csi--node--driver--ptrbz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b6105262-fb93-4a15-bf14-4f48140174ba", ResourceVersion:"688", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 48, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-62", ContainerID:"", Pod:"csi-node-driver-ptrbz", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.119.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali0669725dadc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:48:29.885021 env[1801]: 2024-02-09 09:48:29.803 [INFO][4546] k8s.go 386: Calico CNI using IPs: [192.168.119.194/32] ContainerID="16f00e58914b80b80f1864ba2be6dbb79a47bdbe49e21e760a3da84eb2726625" Namespace="calico-system" Pod="csi-node-driver-ptrbz" WorkloadEndpoint="ip--172--31--30--62-k8s-csi--node--driver--ptrbz-eth0" Feb 9 09:48:29.885021 env[1801]: 2024-02-09 09:48:29.803 [INFO][4546] dataplane_linux.go 68: Setting the host side veth name to cali0669725dadc ContainerID="16f00e58914b80b80f1864ba2be6dbb79a47bdbe49e21e760a3da84eb2726625" Namespace="calico-system" Pod="csi-node-driver-ptrbz" WorkloadEndpoint="ip--172--31--30--62-k8s-csi--node--driver--ptrbz-eth0" Feb 9 09:48:29.885021 env[1801]: 2024-02-09 09:48:29.828 [INFO][4546] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="16f00e58914b80b80f1864ba2be6dbb79a47bdbe49e21e760a3da84eb2726625" Namespace="calico-system" Pod="csi-node-driver-ptrbz" WorkloadEndpoint="ip--172--31--30--62-k8s-csi--node--driver--ptrbz-eth0" Feb 9 09:48:29.885021 env[1801]: 2024-02-09 09:48:29.829 [INFO][4546] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="16f00e58914b80b80f1864ba2be6dbb79a47bdbe49e21e760a3da84eb2726625" Namespace="calico-system" Pod="csi-node-driver-ptrbz" WorkloadEndpoint="ip--172--31--30--62-k8s-csi--node--driver--ptrbz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--62-k8s-csi--node--driver--ptrbz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b6105262-fb93-4a15-bf14-4f48140174ba", ResourceVersion:"688", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 48, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-62", ContainerID:"16f00e58914b80b80f1864ba2be6dbb79a47bdbe49e21e760a3da84eb2726625", Pod:"csi-node-driver-ptrbz", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.119.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali0669725dadc", MAC:"82:58:51:ab:9d:a6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:48:29.885021 env[1801]: 2024-02-09 09:48:29.854 [INFO][4546] k8s.go 491: Wrote updated endpoint to datastore ContainerID="16f00e58914b80b80f1864ba2be6dbb79a47bdbe49e21e760a3da84eb2726625" Namespace="calico-system" Pod="csi-node-driver-ptrbz" WorkloadEndpoint="ip--172--31--30--62-k8s-csi--node--driver--ptrbz-eth0" Feb 9 09:48:29.895189 systemd-networkd[1595]: cali9442450294f: Link UP Feb 9 09:48:29.901257 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali9442450294f: link becomes ready Feb 9 09:48:29.900940 systemd-networkd[1595]: cali9442450294f: Gained carrier Feb 9 
09:48:29.943850 env[1801]: 2024-02-09 09:48:29.586 [INFO][4556] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--62-k8s-coredns--787d4945fb--gm6gf-eth0 coredns-787d4945fb- kube-system ee11eee1-ab84-4220-bd28-7c74f0bfcde8 689 0 2024-02-09 09:47:52 +0000 UTC map[k8s-app:kube-dns pod-template-hash:787d4945fb projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-30-62 coredns-787d4945fb-gm6gf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9442450294f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="449cd8107ed99986e48cf7954fe669f5a411c17bd9f0d1ee2c499e9cd1cd945c" Namespace="kube-system" Pod="coredns-787d4945fb-gm6gf" WorkloadEndpoint="ip--172--31--30--62-k8s-coredns--787d4945fb--gm6gf-" Feb 9 09:48:29.943850 env[1801]: 2024-02-09 09:48:29.587 [INFO][4556] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="449cd8107ed99986e48cf7954fe669f5a411c17bd9f0d1ee2c499e9cd1cd945c" Namespace="kube-system" Pod="coredns-787d4945fb-gm6gf" WorkloadEndpoint="ip--172--31--30--62-k8s-coredns--787d4945fb--gm6gf-eth0" Feb 9 09:48:29.943850 env[1801]: 2024-02-09 09:48:29.722 [INFO][4575] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="449cd8107ed99986e48cf7954fe669f5a411c17bd9f0d1ee2c499e9cd1cd945c" HandleID="k8s-pod-network.449cd8107ed99986e48cf7954fe669f5a411c17bd9f0d1ee2c499e9cd1cd945c" Workload="ip--172--31--30--62-k8s-coredns--787d4945fb--gm6gf-eth0" Feb 9 09:48:29.943850 env[1801]: 2024-02-09 09:48:29.756 [INFO][4575] ipam_plugin.go 268: Auto assigning IP ContainerID="449cd8107ed99986e48cf7954fe669f5a411c17bd9f0d1ee2c499e9cd1cd945c" HandleID="k8s-pod-network.449cd8107ed99986e48cf7954fe669f5a411c17bd9f0d1ee2c499e9cd1cd945c" Workload="ip--172--31--30--62-k8s-coredns--787d4945fb--gm6gf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0x400028b730), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-30-62", "pod":"coredns-787d4945fb-gm6gf", "timestamp":"2024-02-09 09:48:29.722624971 +0000 UTC"}, Hostname:"ip-172-31-30-62", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 09:48:29.943850 env[1801]: 2024-02-09 09:48:29.762 [INFO][4575] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 09:48:29.943850 env[1801]: 2024-02-09 09:48:29.794 [INFO][4575] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 09:48:29.943850 env[1801]: 2024-02-09 09:48:29.794 [INFO][4575] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-62' Feb 9 09:48:29.943850 env[1801]: 2024-02-09 09:48:29.797 [INFO][4575] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.449cd8107ed99986e48cf7954fe669f5a411c17bd9f0d1ee2c499e9cd1cd945c" host="ip-172-31-30-62" Feb 9 09:48:29.943850 env[1801]: 2024-02-09 09:48:29.804 [INFO][4575] ipam.go 372: Looking up existing affinities for host host="ip-172-31-30-62" Feb 9 09:48:29.943850 env[1801]: 2024-02-09 09:48:29.816 [INFO][4575] ipam.go 489: Trying affinity for 192.168.119.192/26 host="ip-172-31-30-62" Feb 9 09:48:29.943850 env[1801]: 2024-02-09 09:48:29.821 [INFO][4575] ipam.go 155: Attempting to load block cidr=192.168.119.192/26 host="ip-172-31-30-62" Feb 9 09:48:29.943850 env[1801]: 2024-02-09 09:48:29.829 [INFO][4575] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.119.192/26 host="ip-172-31-30-62" Feb 9 09:48:29.943850 env[1801]: 2024-02-09 09:48:29.830 [INFO][4575] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.119.192/26 handle="k8s-pod-network.449cd8107ed99986e48cf7954fe669f5a411c17bd9f0d1ee2c499e9cd1cd945c" host="ip-172-31-30-62" Feb 9 09:48:29.943850 env[1801]: 2024-02-09 
09:48:29.832 [INFO][4575] ipam.go 1682: Creating new handle: k8s-pod-network.449cd8107ed99986e48cf7954fe669f5a411c17bd9f0d1ee2c499e9cd1cd945c Feb 9 09:48:29.943850 env[1801]: 2024-02-09 09:48:29.839 [INFO][4575] ipam.go 1203: Writing block in order to claim IPs block=192.168.119.192/26 handle="k8s-pod-network.449cd8107ed99986e48cf7954fe669f5a411c17bd9f0d1ee2c499e9cd1cd945c" host="ip-172-31-30-62" Feb 9 09:48:29.943850 env[1801]: 2024-02-09 09:48:29.870 [INFO][4575] ipam.go 1216: Successfully claimed IPs: [192.168.119.195/26] block=192.168.119.192/26 handle="k8s-pod-network.449cd8107ed99986e48cf7954fe669f5a411c17bd9f0d1ee2c499e9cd1cd945c" host="ip-172-31-30-62" Feb 9 09:48:29.943850 env[1801]: 2024-02-09 09:48:29.871 [INFO][4575] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.119.195/26] handle="k8s-pod-network.449cd8107ed99986e48cf7954fe669f5a411c17bd9f0d1ee2c499e9cd1cd945c" host="ip-172-31-30-62" Feb 9 09:48:29.943850 env[1801]: 2024-02-09 09:48:29.871 [INFO][4575] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 09:48:29.943850 env[1801]: 2024-02-09 09:48:29.872 [INFO][4575] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.119.195/26] IPv6=[] ContainerID="449cd8107ed99986e48cf7954fe669f5a411c17bd9f0d1ee2c499e9cd1cd945c" HandleID="k8s-pod-network.449cd8107ed99986e48cf7954fe669f5a411c17bd9f0d1ee2c499e9cd1cd945c" Workload="ip--172--31--30--62-k8s-coredns--787d4945fb--gm6gf-eth0" Feb 9 09:48:29.946701 env[1801]: 2024-02-09 09:48:29.886 [INFO][4556] k8s.go 385: Populated endpoint ContainerID="449cd8107ed99986e48cf7954fe669f5a411c17bd9f0d1ee2c499e9cd1cd945c" Namespace="kube-system" Pod="coredns-787d4945fb-gm6gf" WorkloadEndpoint="ip--172--31--30--62-k8s-coredns--787d4945fb--gm6gf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--62-k8s-coredns--787d4945fb--gm6gf-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"ee11eee1-ab84-4220-bd28-7c74f0bfcde8", ResourceVersion:"689", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 47, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-62", ContainerID:"", Pod:"coredns-787d4945fb-gm6gf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.119.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9442450294f", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:48:29.946701 env[1801]: 2024-02-09 09:48:29.887 [INFO][4556] k8s.go 386: Calico CNI using IPs: [192.168.119.195/32] ContainerID="449cd8107ed99986e48cf7954fe669f5a411c17bd9f0d1ee2c499e9cd1cd945c" Namespace="kube-system" Pod="coredns-787d4945fb-gm6gf" WorkloadEndpoint="ip--172--31--30--62-k8s-coredns--787d4945fb--gm6gf-eth0" Feb 9 09:48:29.946701 env[1801]: 2024-02-09 09:48:29.887 [INFO][4556] dataplane_linux.go 68: Setting the host side veth name to cali9442450294f ContainerID="449cd8107ed99986e48cf7954fe669f5a411c17bd9f0d1ee2c499e9cd1cd945c" Namespace="kube-system" Pod="coredns-787d4945fb-gm6gf" WorkloadEndpoint="ip--172--31--30--62-k8s-coredns--787d4945fb--gm6gf-eth0" Feb 9 09:48:29.946701 env[1801]: 2024-02-09 09:48:29.908 [INFO][4556] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="449cd8107ed99986e48cf7954fe669f5a411c17bd9f0d1ee2c499e9cd1cd945c" Namespace="kube-system" Pod="coredns-787d4945fb-gm6gf" WorkloadEndpoint="ip--172--31--30--62-k8s-coredns--787d4945fb--gm6gf-eth0" Feb 9 09:48:29.946701 env[1801]: 2024-02-09 09:48:29.913 [INFO][4556] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="449cd8107ed99986e48cf7954fe669f5a411c17bd9f0d1ee2c499e9cd1cd945c" Namespace="kube-system" Pod="coredns-787d4945fb-gm6gf" WorkloadEndpoint="ip--172--31--30--62-k8s-coredns--787d4945fb--gm6gf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--62-k8s-coredns--787d4945fb--gm6gf-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"ee11eee1-ab84-4220-bd28-7c74f0bfcde8", ResourceVersion:"689", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 47, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-62", ContainerID:"449cd8107ed99986e48cf7954fe669f5a411c17bd9f0d1ee2c499e9cd1cd945c", Pod:"coredns-787d4945fb-gm6gf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.119.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9442450294f", MAC:"e6:10:df:44:f9:8b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:48:29.946701 env[1801]: 2024-02-09 09:48:29.932 [INFO][4556] k8s.go 491: Wrote updated endpoint to datastore ContainerID="449cd8107ed99986e48cf7954fe669f5a411c17bd9f0d1ee2c499e9cd1cd945c" Namespace="kube-system" Pod="coredns-787d4945fb-gm6gf" 
WorkloadEndpoint="ip--172--31--30--62-k8s-coredns--787d4945fb--gm6gf-eth0" Feb 9 09:48:29.970638 env[1801]: time="2024-02-09T09:48:29.950059924Z" level=info msg="StopPodSandbox for \"7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d\"" Feb 9 09:48:29.987000 audit[4594]: NETFILTER_CFG table=filter:116 family=2 entries=34 op=nft_register_chain pid=4594 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 09:48:29.987000 audit[4594]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=18320 a0=3 a1=ffffcd2199d0 a2=0 a3=ffffb4b55fa8 items=0 ppid=4200 pid=4594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:29.987000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 09:48:30.045214 env[1801]: time="2024-02-09T09:48:30.045099500Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:48:30.045494 env[1801]: time="2024-02-09T09:48:30.045444050Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:48:30.045719 env[1801]: time="2024-02-09T09:48:30.045669802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:48:30.046280 env[1801]: time="2024-02-09T09:48:30.046162450Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/16f00e58914b80b80f1864ba2be6dbb79a47bdbe49e21e760a3da84eb2726625 pid=4638 runtime=io.containerd.runc.v2 Feb 9 09:48:30.050000 audit[4649]: NETFILTER_CFG table=filter:117 family=2 entries=44 op=nft_register_chain pid=4649 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 09:48:30.050000 audit[4649]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=22284 a0=3 a1=ffffd6314790 a2=0 a3=ffffa31b9fa8 items=0 ppid=4200 pid=4649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:30.050000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 09:48:30.069118 env[1801]: time="2024-02-09T09:48:30.068300607Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:48:30.069118 env[1801]: time="2024-02-09T09:48:30.068395179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:48:30.069118 env[1801]: time="2024-02-09T09:48:30.068421852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:48:30.069118 env[1801]: time="2024-02-09T09:48:30.068762310Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/449cd8107ed99986e48cf7954fe669f5a411c17bd9f0d1ee2c499e9cd1cd945c pid=4656 runtime=io.containerd.runc.v2 Feb 9 09:48:30.302689 env[1801]: time="2024-02-09T09:48:30.302632107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ptrbz,Uid:b6105262-fb93-4a15-bf14-4f48140174ba,Namespace:calico-system,Attempt:1,} returns sandbox id \"16f00e58914b80b80f1864ba2be6dbb79a47bdbe49e21e760a3da84eb2726625\"" Feb 9 09:48:30.320867 env[1801]: time="2024-02-09T09:48:30.320809297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-gm6gf,Uid:ee11eee1-ab84-4220-bd28-7c74f0bfcde8,Namespace:kube-system,Attempt:1,} returns sandbox id \"449cd8107ed99986e48cf7954fe669f5a411c17bd9f0d1ee2c499e9cd1cd945c\"" Feb 9 09:48:30.330388 env[1801]: time="2024-02-09T09:48:30.330331740Z" level=info msg="CreateContainer within sandbox \"449cd8107ed99986e48cf7954fe669f5a411c17bd9f0d1ee2c499e9cd1cd945c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 09:48:30.361646 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1519787930.mount: Deactivated successfully. 
Feb 9 09:48:30.377760 env[1801]: time="2024-02-09T09:48:30.377671063Z" level=info msg="CreateContainer within sandbox \"449cd8107ed99986e48cf7954fe669f5a411c17bd9f0d1ee2c499e9cd1cd945c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"03fa6978269ef46194ef756bd1cb4355acdb7960806734ece594b014f6d08816\"" Feb 9 09:48:30.382395 env[1801]: time="2024-02-09T09:48:30.381781089Z" level=info msg="StartContainer for \"03fa6978269ef46194ef756bd1cb4355acdb7960806734ece594b014f6d08816\"" Feb 9 09:48:30.429278 env[1801]: 2024-02-09 09:48:30.217 [INFO][4636] k8s.go 578: Cleaning up netns ContainerID="7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d" Feb 9 09:48:30.429278 env[1801]: 2024-02-09 09:48:30.218 [INFO][4636] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d" iface="eth0" netns="/var/run/netns/cni-bd09a6a7-d977-f621-c09d-b586c34cf896" Feb 9 09:48:30.429278 env[1801]: 2024-02-09 09:48:30.218 [INFO][4636] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d" iface="eth0" netns="/var/run/netns/cni-bd09a6a7-d977-f621-c09d-b586c34cf896" Feb 9 09:48:30.429278 env[1801]: 2024-02-09 09:48:30.219 [INFO][4636] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d" iface="eth0" netns="/var/run/netns/cni-bd09a6a7-d977-f621-c09d-b586c34cf896" Feb 9 09:48:30.429278 env[1801]: 2024-02-09 09:48:30.219 [INFO][4636] k8s.go 585: Releasing IP address(es) ContainerID="7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d" Feb 9 09:48:30.429278 env[1801]: 2024-02-09 09:48:30.219 [INFO][4636] utils.go 188: Calico CNI releasing IP address ContainerID="7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d" Feb 9 09:48:30.429278 env[1801]: 2024-02-09 09:48:30.386 [INFO][4702] ipam_plugin.go 415: Releasing address using handleID ContainerID="7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d" HandleID="k8s-pod-network.7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d" Workload="ip--172--31--30--62-k8s-coredns--787d4945fb--zgw6s-eth0" Feb 9 09:48:30.429278 env[1801]: 2024-02-09 09:48:30.387 [INFO][4702] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 09:48:30.429278 env[1801]: 2024-02-09 09:48:30.387 [INFO][4702] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 09:48:30.429278 env[1801]: 2024-02-09 09:48:30.412 [WARNING][4702] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d" HandleID="k8s-pod-network.7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d" Workload="ip--172--31--30--62-k8s-coredns--787d4945fb--zgw6s-eth0" Feb 9 09:48:30.429278 env[1801]: 2024-02-09 09:48:30.412 [INFO][4702] ipam_plugin.go 443: Releasing address using workloadID ContainerID="7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d" HandleID="k8s-pod-network.7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d" Workload="ip--172--31--30--62-k8s-coredns--787d4945fb--zgw6s-eth0" Feb 9 09:48:30.429278 env[1801]: 2024-02-09 09:48:30.423 [INFO][4702] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 09:48:30.429278 env[1801]: 2024-02-09 09:48:30.426 [INFO][4636] k8s.go 591: Teardown processing complete. ContainerID="7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d" Feb 9 09:48:30.430345 env[1801]: time="2024-02-09T09:48:30.429608488Z" level=info msg="TearDown network for sandbox \"7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d\" successfully" Feb 9 09:48:30.430345 env[1801]: time="2024-02-09T09:48:30.429673088Z" level=info msg="StopPodSandbox for \"7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d\" returns successfully" Feb 9 09:48:30.431328 env[1801]: time="2024-02-09T09:48:30.431270957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-zgw6s,Uid:d7a41dab-c1c4-470c-a625-78b48d2cd3c8,Namespace:kube-system,Attempt:1,}" Feb 9 09:48:30.565685 env[1801]: time="2024-02-09T09:48:30.564041417Z" level=info msg="StartContainer for \"03fa6978269ef46194ef756bd1cb4355acdb7960806734ece594b014f6d08816\" returns successfully" Feb 9 09:48:30.770805 systemd-networkd[1595]: cali81bfaa31879: Link UP Feb 9 09:48:30.776502 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali81bfaa31879: link becomes ready Feb 9 09:48:30.776633 systemd-networkd[1595]: cali81bfaa31879: Gained carrier Feb 9 
09:48:30.803444 env[1801]: 2024-02-09 09:48:30.555 [INFO][4745] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--62-k8s-coredns--787d4945fb--zgw6s-eth0 coredns-787d4945fb- kube-system d7a41dab-c1c4-470c-a625-78b48d2cd3c8 701 0 2024-02-09 09:47:52 +0000 UTC map[k8s-app:kube-dns pod-template-hash:787d4945fb projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-30-62 coredns-787d4945fb-zgw6s eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali81bfaa31879 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="01df923f4aa76bcd3690570c574be7ec74c82e0d29b71748e13e73e3b6919f2a" Namespace="kube-system" Pod="coredns-787d4945fb-zgw6s" WorkloadEndpoint="ip--172--31--30--62-k8s-coredns--787d4945fb--zgw6s-" Feb 9 09:48:30.803444 env[1801]: 2024-02-09 09:48:30.555 [INFO][4745] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="01df923f4aa76bcd3690570c574be7ec74c82e0d29b71748e13e73e3b6919f2a" Namespace="kube-system" Pod="coredns-787d4945fb-zgw6s" WorkloadEndpoint="ip--172--31--30--62-k8s-coredns--787d4945fb--zgw6s-eth0" Feb 9 09:48:30.803444 env[1801]: 2024-02-09 09:48:30.657 [INFO][4773] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="01df923f4aa76bcd3690570c574be7ec74c82e0d29b71748e13e73e3b6919f2a" HandleID="k8s-pod-network.01df923f4aa76bcd3690570c574be7ec74c82e0d29b71748e13e73e3b6919f2a" Workload="ip--172--31--30--62-k8s-coredns--787d4945fb--zgw6s-eth0" Feb 9 09:48:30.803444 env[1801]: 2024-02-09 09:48:30.677 [INFO][4773] ipam_plugin.go 268: Auto assigning IP ContainerID="01df923f4aa76bcd3690570c574be7ec74c82e0d29b71748e13e73e3b6919f2a" HandleID="k8s-pod-network.01df923f4aa76bcd3690570c574be7ec74c82e0d29b71748e13e73e3b6919f2a" Workload="ip--172--31--30--62-k8s-coredns--787d4945fb--zgw6s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0x400004d950), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-30-62", "pod":"coredns-787d4945fb-zgw6s", "timestamp":"2024-02-09 09:48:30.657932899 +0000 UTC"}, Hostname:"ip-172-31-30-62", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 09:48:30.803444 env[1801]: 2024-02-09 09:48:30.677 [INFO][4773] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 09:48:30.803444 env[1801]: 2024-02-09 09:48:30.678 [INFO][4773] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 09:48:30.803444 env[1801]: 2024-02-09 09:48:30.678 [INFO][4773] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-62' Feb 9 09:48:30.803444 env[1801]: 2024-02-09 09:48:30.680 [INFO][4773] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.01df923f4aa76bcd3690570c574be7ec74c82e0d29b71748e13e73e3b6919f2a" host="ip-172-31-30-62" Feb 9 09:48:30.803444 env[1801]: 2024-02-09 09:48:30.688 [INFO][4773] ipam.go 372: Looking up existing affinities for host host="ip-172-31-30-62" Feb 9 09:48:30.803444 env[1801]: 2024-02-09 09:48:30.695 [INFO][4773] ipam.go 489: Trying affinity for 192.168.119.192/26 host="ip-172-31-30-62" Feb 9 09:48:30.803444 env[1801]: 2024-02-09 09:48:30.699 [INFO][4773] ipam.go 155: Attempting to load block cidr=192.168.119.192/26 host="ip-172-31-30-62" Feb 9 09:48:30.803444 env[1801]: 2024-02-09 09:48:30.703 [INFO][4773] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.119.192/26 host="ip-172-31-30-62" Feb 9 09:48:30.803444 env[1801]: 2024-02-09 09:48:30.703 [INFO][4773] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.119.192/26 handle="k8s-pod-network.01df923f4aa76bcd3690570c574be7ec74c82e0d29b71748e13e73e3b6919f2a" host="ip-172-31-30-62" Feb 9 09:48:30.803444 env[1801]: 2024-02-09 
09:48:30.706 [INFO][4773] ipam.go 1682: Creating new handle: k8s-pod-network.01df923f4aa76bcd3690570c574be7ec74c82e0d29b71748e13e73e3b6919f2a Feb 9 09:48:30.803444 env[1801]: 2024-02-09 09:48:30.717 [INFO][4773] ipam.go 1203: Writing block in order to claim IPs block=192.168.119.192/26 handle="k8s-pod-network.01df923f4aa76bcd3690570c574be7ec74c82e0d29b71748e13e73e3b6919f2a" host="ip-172-31-30-62" Feb 9 09:48:30.803444 env[1801]: 2024-02-09 09:48:30.728 [INFO][4773] ipam.go 1216: Successfully claimed IPs: [192.168.119.196/26] block=192.168.119.192/26 handle="k8s-pod-network.01df923f4aa76bcd3690570c574be7ec74c82e0d29b71748e13e73e3b6919f2a" host="ip-172-31-30-62" Feb 9 09:48:30.803444 env[1801]: 2024-02-09 09:48:30.728 [INFO][4773] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.119.196/26] handle="k8s-pod-network.01df923f4aa76bcd3690570c574be7ec74c82e0d29b71748e13e73e3b6919f2a" host="ip-172-31-30-62" Feb 9 09:48:30.803444 env[1801]: 2024-02-09 09:48:30.728 [INFO][4773] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 09:48:30.803444 env[1801]: 2024-02-09 09:48:30.728 [INFO][4773] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.119.196/26] IPv6=[] ContainerID="01df923f4aa76bcd3690570c574be7ec74c82e0d29b71748e13e73e3b6919f2a" HandleID="k8s-pod-network.01df923f4aa76bcd3690570c574be7ec74c82e0d29b71748e13e73e3b6919f2a" Workload="ip--172--31--30--62-k8s-coredns--787d4945fb--zgw6s-eth0" Feb 9 09:48:30.804750 env[1801]: 2024-02-09 09:48:30.733 [INFO][4745] k8s.go 385: Populated endpoint ContainerID="01df923f4aa76bcd3690570c574be7ec74c82e0d29b71748e13e73e3b6919f2a" Namespace="kube-system" Pod="coredns-787d4945fb-zgw6s" WorkloadEndpoint="ip--172--31--30--62-k8s-coredns--787d4945fb--zgw6s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--62-k8s-coredns--787d4945fb--zgw6s-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"d7a41dab-c1c4-470c-a625-78b48d2cd3c8", ResourceVersion:"701", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 47, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-62", ContainerID:"", Pod:"coredns-787d4945fb-zgw6s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.119.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali81bfaa31879", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:48:30.804750 env[1801]: 2024-02-09 09:48:30.733 [INFO][4745] k8s.go 386: Calico CNI using IPs: [192.168.119.196/32] ContainerID="01df923f4aa76bcd3690570c574be7ec74c82e0d29b71748e13e73e3b6919f2a" Namespace="kube-system" Pod="coredns-787d4945fb-zgw6s" WorkloadEndpoint="ip--172--31--30--62-k8s-coredns--787d4945fb--zgw6s-eth0" Feb 9 09:48:30.804750 env[1801]: 2024-02-09 09:48:30.733 [INFO][4745] dataplane_linux.go 68: Setting the host side veth name to cali81bfaa31879 ContainerID="01df923f4aa76bcd3690570c574be7ec74c82e0d29b71748e13e73e3b6919f2a" Namespace="kube-system" Pod="coredns-787d4945fb-zgw6s" WorkloadEndpoint="ip--172--31--30--62-k8s-coredns--787d4945fb--zgw6s-eth0" Feb 9 09:48:30.804750 env[1801]: 2024-02-09 09:48:30.778 [INFO][4745] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="01df923f4aa76bcd3690570c574be7ec74c82e0d29b71748e13e73e3b6919f2a" Namespace="kube-system" Pod="coredns-787d4945fb-zgw6s" WorkloadEndpoint="ip--172--31--30--62-k8s-coredns--787d4945fb--zgw6s-eth0" Feb 9 09:48:30.804750 env[1801]: 2024-02-09 09:48:30.779 [INFO][4745] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="01df923f4aa76bcd3690570c574be7ec74c82e0d29b71748e13e73e3b6919f2a" Namespace="kube-system" Pod="coredns-787d4945fb-zgw6s" WorkloadEndpoint="ip--172--31--30--62-k8s-coredns--787d4945fb--zgw6s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--62-k8s-coredns--787d4945fb--zgw6s-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"d7a41dab-c1c4-470c-a625-78b48d2cd3c8", ResourceVersion:"701", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 47, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-62", ContainerID:"01df923f4aa76bcd3690570c574be7ec74c82e0d29b71748e13e73e3b6919f2a", Pod:"coredns-787d4945fb-zgw6s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.119.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali81bfaa31879", MAC:"16:10:c7:1c:6b:3c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:48:30.804750 env[1801]: 2024-02-09 09:48:30.796 [INFO][4745] k8s.go 491: Wrote updated endpoint to datastore ContainerID="01df923f4aa76bcd3690570c574be7ec74c82e0d29b71748e13e73e3b6919f2a" Namespace="kube-system" Pod="coredns-787d4945fb-zgw6s" 
WorkloadEndpoint="ip--172--31--30--62-k8s-coredns--787d4945fb--zgw6s-eth0" Feb 9 09:48:30.843000 audit[4797]: NETFILTER_CFG table=filter:118 family=2 entries=38 op=nft_register_chain pid=4797 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 09:48:30.843000 audit[4797]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19088 a0=3 a1=ffffc331dbf0 a2=0 a3=ffffb1021fa8 items=0 ppid=4200 pid=4797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:30.843000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 09:48:30.886903 env[1801]: time="2024-02-09T09:48:30.886804612Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:48:30.887090 env[1801]: time="2024-02-09T09:48:30.886877599Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:48:30.887090 env[1801]: time="2024-02-09T09:48:30.886973671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:48:30.887352 env[1801]: time="2024-02-09T09:48:30.887225340Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/01df923f4aa76bcd3690570c574be7ec74c82e0d29b71748e13e73e3b6919f2a pid=4804 runtime=io.containerd.runc.v2 Feb 9 09:48:31.014478 env[1801]: time="2024-02-09T09:48:31.014422445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-zgw6s,Uid:d7a41dab-c1c4-470c-a625-78b48d2cd3c8,Namespace:kube-system,Attempt:1,} returns sandbox id \"01df923f4aa76bcd3690570c574be7ec74c82e0d29b71748e13e73e3b6919f2a\"" Feb 9 09:48:31.023163 env[1801]: time="2024-02-09T09:48:31.023107994Z" level=info msg="CreateContainer within sandbox \"01df923f4aa76bcd3690570c574be7ec74c82e0d29b71748e13e73e3b6919f2a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 09:48:31.048190 env[1801]: time="2024-02-09T09:48:31.048129439Z" level=info msg="CreateContainer within sandbox \"01df923f4aa76bcd3690570c574be7ec74c82e0d29b71748e13e73e3b6919f2a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f482ec5bf3b6eceeb0a45e02bbe8849c04d91555b4708c0472d38e186a20b19d\"" Feb 9 09:48:31.051594 env[1801]: time="2024-02-09T09:48:31.051513154Z" level=info msg="StartContainer for \"f482ec5bf3b6eceeb0a45e02bbe8849c04d91555b4708c0472d38e186a20b19d\"" Feb 9 09:48:31.277849 systemd[1]: run-netns-cni\x2dbd09a6a7\x2dd977\x2df621\x2dc09d\x2db586c34cf896.mount: Deactivated successfully. 
Feb 9 09:48:31.326522 env[1801]: time="2024-02-09T09:48:31.326457654Z" level=info msg="StartContainer for \"f482ec5bf3b6eceeb0a45e02bbe8849c04d91555b4708c0472d38e186a20b19d\" returns successfully" Feb 9 09:48:31.346840 kubelet[3093]: I0209 09:48:31.346771 3093 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-gm6gf" podStartSLOduration=39.346688477 pod.CreationTimestamp="2024-02-09 09:47:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:48:31.319662437 +0000 UTC m=+52.968798410" watchObservedRunningTime="2024-02-09 09:48:31.346688477 +0000 UTC m=+52.995824438" Feb 9 09:48:31.450966 systemd-networkd[1595]: cali9442450294f: Gained IPv6LL Feb 9 09:48:31.763835 kernel: kauditd_printk_skb: 98 callbacks suppressed Feb 9 09:48:31.764042 kernel: audit: type=1325 audit(1707472111.759:306): table=filter:119 family=2 entries=12 op=nft_register_rule pid=4914 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:48:31.759000 audit[4914]: NETFILTER_CFG table=filter:119 family=2 entries=12 op=nft_register_rule pid=4914 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:48:31.759000 audit[4914]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4028 a0=3 a1=ffffec255320 a2=0 a3=ffffafb4c6c0 items=0 ppid=3252 pid=4914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:31.782795 kernel: audit: type=1300 audit(1707472111.759:306): arch=c00000b7 syscall=211 success=yes exit=4028 a0=3 a1=ffffec255320 a2=0 a3=ffffafb4c6c0 items=0 ppid=3252 pid=4914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Feb 9 09:48:31.829627 kernel: audit: type=1327 audit(1707472111.759:306): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:48:31.759000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:48:31.834786 systemd-networkd[1595]: cali0669725dadc: Gained IPv6LL Feb 9 09:48:31.853274 kernel: audit: type=1325 audit(1707472111.787:307): table=nat:120 family=2 entries=30 op=nft_register_rule pid=4914 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:48:31.787000 audit[4914]: NETFILTER_CFG table=nat:120 family=2 entries=30 op=nft_register_rule pid=4914 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:48:31.873662 kernel: audit: type=1300 audit(1707472111.787:307): arch=c00000b7 syscall=211 success=yes exit=8836 a0=3 a1=ffffec255320 a2=0 a3=ffffafb4c6c0 items=0 ppid=3252 pid=4914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:31.787000 audit[4914]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8836 a0=3 a1=ffffec255320 a2=0 a3=ffffafb4c6c0 items=0 ppid=3252 pid=4914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:31.902642 kernel: audit: type=1327 audit(1707472111.787:307): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:48:31.787000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:48:32.090920 systemd-networkd[1595]: cali81bfaa31879: Gained IPv6LL Feb 9 09:48:32.252024 
env[1801]: time="2024-02-09T09:48:32.251965647Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:48:32.251000 audit[4946]: NETFILTER_CFG table=filter:121 family=2 entries=9 op=nft_register_rule pid=4946 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:48:32.251000 audit[4946]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=fffff5026390 a2=0 a3=ffffbb15c6c0 items=0 ppid=3252 pid=4946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:32.272816 kernel: audit: type=1325 audit(1707472112.251:308): table=filter:121 family=2 entries=9 op=nft_register_rule pid=4946 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:48:32.272931 kernel: audit: type=1300 audit(1707472112.251:308): arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=fffff5026390 a2=0 a3=ffffbb15c6c0 items=0 ppid=3252 pid=4946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:32.251000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:48:32.279765 kernel: audit: type=1327 audit(1707472112.251:308): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:48:32.281195 env[1801]: time="2024-02-09T09:48:32.281125219Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:094645649618376e48b5ec13a94a164d53dbdf819b7ab644f080b751f24560c8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:48:32.286342 
env[1801]: time="2024-02-09T09:48:32.286267190Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:48:32.293087 env[1801]: time="2024-02-09T09:48:32.293021028Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:e264ab1fb2f1ae90dd1d84e226d11d2eb4350e74ac27de4c65f29f5aadba5bb1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:48:32.295000 audit[4946]: NETFILTER_CFG table=nat:122 family=2 entries=51 op=nft_register_chain pid=4946 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:48:32.302659 kernel: audit: type=1325 audit(1707472112.295:309): table=nat:122 family=2 entries=51 op=nft_register_chain pid=4946 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:48:32.295000 audit[4946]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19324 a0=3 a1=fffff5026390 a2=0 a3=ffffbb15c6c0 items=0 ppid=3252 pid=4946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:32.295000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:48:32.308219 env[1801]: time="2024-02-09T09:48:32.303982488Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.27.0\" returns image reference \"sha256:094645649618376e48b5ec13a94a164d53dbdf819b7ab644f080b751f24560c8\"" Feb 9 09:48:32.309244 env[1801]: time="2024-02-09T09:48:32.308950995Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\"" Feb 9 09:48:32.369996 kubelet[3093]: I0209 09:48:32.369831 3093 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-zgw6s" 
podStartSLOduration=40.369771131 pod.CreationTimestamp="2024-02-09 09:47:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:48:32.337265092 +0000 UTC m=+53.986401041" watchObservedRunningTime="2024-02-09 09:48:32.369771131 +0000 UTC m=+54.018907068" Feb 9 09:48:32.383170 env[1801]: time="2024-02-09T09:48:32.383073076Z" level=info msg="CreateContainer within sandbox \"dc9006bd1c4cae56b9c3c5727d16882a21665f3ce04a0e38c5ef36632c2538b0\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 9 09:48:32.422401 env[1801]: time="2024-02-09T09:48:32.422298179Z" level=info msg="CreateContainer within sandbox \"dc9006bd1c4cae56b9c3c5727d16882a21665f3ce04a0e38c5ef36632c2538b0\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"29132a5ada4c7efd8b06cf8e8587afd089835cf7869a279a10998ceb12a16738\"" Feb 9 09:48:32.423630 env[1801]: time="2024-02-09T09:48:32.423526795Z" level=info msg="StartContainer for \"29132a5ada4c7efd8b06cf8e8587afd089835cf7869a279a10998ceb12a16738\"" Feb 9 09:48:32.703654 env[1801]: time="2024-02-09T09:48:32.703499379Z" level=info msg="StartContainer for \"29132a5ada4c7efd8b06cf8e8587afd089835cf7869a279a10998ceb12a16738\" returns successfully" Feb 9 09:48:32.734000 audit[5012]: NETFILTER_CFG table=filter:123 family=2 entries=6 op=nft_register_rule pid=5012 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:48:32.734000 audit[5012]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffe8816580 a2=0 a3=ffffa1be16c0 items=0 ppid=3252 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:32.734000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:48:32.760000 audit[5012]: NETFILTER_CFG table=nat:124 family=2 entries=72 op=nft_register_chain pid=5012 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:48:32.760000 audit[5012]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=ffffe8816580 a2=0 a3=ffffa1be16c0 items=0 ppid=3252 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:32.760000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:48:33.350389 systemd[1]: run-containerd-runc-k8s.io-29132a5ada4c7efd8b06cf8e8587afd089835cf7869a279a10998ceb12a16738-runc.BI87tA.mount: Deactivated successfully. Feb 9 09:48:33.463353 kubelet[3093]: I0209 09:48:33.463282 3093 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6fcdf54d4d-n9f7w" podStartSLOduration=-9.223372003391552e+09 pod.CreationTimestamp="2024-02-09 09:48:00 +0000 UTC" firstStartedPulling="2024-02-09 09:48:27.806815878 +0000 UTC m=+49.455951803" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:48:33.34848638 +0000 UTC m=+54.997622341" watchObservedRunningTime="2024-02-09 09:48:33.463224157 +0000 UTC m=+55.112360106" Feb 9 09:48:34.038698 env[1801]: time="2024-02-09T09:48:34.038639174Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:48:34.044597 env[1801]: time="2024-02-09T09:48:34.044500903Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:4b71e7439e0eba34a97844591560a009f37e8e6c17a386a34d416c1cc872dee8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:48:34.048477 env[1801]: time="2024-02-09T09:48:34.048421176Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:48:34.051588 env[1801]: time="2024-02-09T09:48:34.051509921Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:2b9021393c17e87ba8a3c89f5b3719941812f4e4751caa0b71eb2233bff48738,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:48:34.052434 env[1801]: time="2024-02-09T09:48:34.052374267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\" returns image reference \"sha256:4b71e7439e0eba34a97844591560a009f37e8e6c17a386a34d416c1cc872dee8\"" Feb 9 09:48:34.056799 env[1801]: time="2024-02-09T09:48:34.056744448Z" level=info msg="CreateContainer within sandbox \"16f00e58914b80b80f1864ba2be6dbb79a47bdbe49e21e760a3da84eb2726625\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 9 09:48:34.105950 env[1801]: time="2024-02-09T09:48:34.105834801Z" level=info msg="CreateContainer within sandbox \"16f00e58914b80b80f1864ba2be6dbb79a47bdbe49e21e760a3da84eb2726625\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"82f4d3dcfc13d6600d7ac4fe2c0b0c8e3a6d60ef3e9419c129a78e0c0bd3ca50\"" Feb 9 09:48:34.107062 env[1801]: time="2024-02-09T09:48:34.107014080Z" level=info msg="StartContainer for \"82f4d3dcfc13d6600d7ac4fe2c0b0c8e3a6d60ef3e9419c129a78e0c0bd3ca50\"" Feb 9 09:48:34.424363 env[1801]: time="2024-02-09T09:48:34.424219614Z" level=info msg="StartContainer for \"82f4d3dcfc13d6600d7ac4fe2c0b0c8e3a6d60ef3e9419c129a78e0c0bd3ca50\" returns successfully" Feb 9 09:48:34.426781 env[1801]: time="2024-02-09T09:48:34.426715077Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0\"" Feb 9 09:48:36.237995 env[1801]: time="2024-02-09T09:48:36.237916264Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:48:36.242617 env[1801]: time="2024-02-09T09:48:36.242518129Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9dbda087e98c46610fb8629cf530f1fe49eee4b17d2afe455664ca446ec39d43,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:48:36.246731 env[1801]: time="2024-02-09T09:48:36.246663063Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:48:36.250740 env[1801]: time="2024-02-09T09:48:36.250672062Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:45a7aba6020a7cf7b866cb8a8d481b30c97e9b3407e1459aaa65a5b4cc06633a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:48:36.253318 env[1801]: time="2024-02-09T09:48:36.252244533Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0\" returns image reference \"sha256:9dbda087e98c46610fb8629cf530f1fe49eee4b17d2afe455664ca446ec39d43\"" Feb 9 09:48:36.261707 env[1801]: time="2024-02-09T09:48:36.261635459Z" level=info msg="CreateContainer within sandbox \"16f00e58914b80b80f1864ba2be6dbb79a47bdbe49e21e760a3da84eb2726625\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 9 09:48:36.286218 env[1801]: time="2024-02-09T09:48:36.286138749Z" level=info msg="CreateContainer within sandbox \"16f00e58914b80b80f1864ba2be6dbb79a47bdbe49e21e760a3da84eb2726625\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id 
\"aff75c197363b18acc6f6da2f9c7ce773dcf221fedcc76e154c1a381982123e0\"" Feb 9 09:48:36.287285 env[1801]: time="2024-02-09T09:48:36.287233652Z" level=info msg="StartContainer for \"aff75c197363b18acc6f6da2f9c7ce773dcf221fedcc76e154c1a381982123e0\"" Feb 9 09:48:36.373334 systemd[1]: run-containerd-runc-k8s.io-aff75c197363b18acc6f6da2f9c7ce773dcf221fedcc76e154c1a381982123e0-runc.btaaCx.mount: Deactivated successfully. Feb 9 09:48:36.486067 env[1801]: time="2024-02-09T09:48:36.485982299Z" level=info msg="StartContainer for \"aff75c197363b18acc6f6da2f9c7ce773dcf221fedcc76e154c1a381982123e0\" returns successfully" Feb 9 09:48:37.179039 kubelet[3093]: I0209 09:48:37.178981 3093 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 9 09:48:37.179039 kubelet[3093]: I0209 09:48:37.179032 3093 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 9 09:48:37.358861 kubelet[3093]: I0209 09:48:37.358814 3093 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-ptrbz" podStartSLOduration=-9.223371999496017e+09 pod.CreationTimestamp="2024-02-09 09:48:00 +0000 UTC" firstStartedPulling="2024-02-09 09:48:30.305241644 +0000 UTC m=+51.954377581" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:48:37.358387567 +0000 UTC m=+59.007523528" watchObservedRunningTime="2024-02-09 09:48:37.358758528 +0000 UTC m=+59.007894465" Feb 9 09:48:38.683033 env[1801]: time="2024-02-09T09:48:38.682973354Z" level=info msg="StopPodSandbox for \"95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3\"" Feb 9 09:48:38.886032 env[1801]: 2024-02-09 09:48:38.788 [WARNING][5123] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--62-k8s-calico--kube--controllers--6fcdf54d4d--n9f7w-eth0", GenerateName:"calico-kube-controllers-6fcdf54d4d-", Namespace:"calico-system", SelfLink:"", UID:"e000c922-03b1-4fd6-9ba0-d228ef27458c", ResourceVersion:"746", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 48, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6fcdf54d4d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-62", ContainerID:"dc9006bd1c4cae56b9c3c5727d16882a21665f3ce04a0e38c5ef36632c2538b0", Pod:"calico-kube-controllers-6fcdf54d4d-n9f7w", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.119.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7f54e13fb3e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:48:38.886032 env[1801]: 2024-02-09 09:48:38.789 [INFO][5123] k8s.go 578: Cleaning up netns ContainerID="95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3" Feb 9 09:48:38.886032 env[1801]: 2024-02-09 09:48:38.789 [INFO][5123] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3" iface="eth0" netns="" Feb 9 09:48:38.886032 env[1801]: 2024-02-09 09:48:38.789 [INFO][5123] k8s.go 585: Releasing IP address(es) ContainerID="95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3" Feb 9 09:48:38.886032 env[1801]: 2024-02-09 09:48:38.789 [INFO][5123] utils.go 188: Calico CNI releasing IP address ContainerID="95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3" Feb 9 09:48:38.886032 env[1801]: 2024-02-09 09:48:38.857 [INFO][5130] ipam_plugin.go 415: Releasing address using handleID ContainerID="95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3" HandleID="k8s-pod-network.95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3" Workload="ip--172--31--30--62-k8s-calico--kube--controllers--6fcdf54d4d--n9f7w-eth0" Feb 9 09:48:38.886032 env[1801]: 2024-02-09 09:48:38.858 [INFO][5130] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 09:48:38.886032 env[1801]: 2024-02-09 09:48:38.858 [INFO][5130] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 09:48:38.886032 env[1801]: 2024-02-09 09:48:38.874 [WARNING][5130] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3" HandleID="k8s-pod-network.95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3" Workload="ip--172--31--30--62-k8s-calico--kube--controllers--6fcdf54d4d--n9f7w-eth0" Feb 9 09:48:38.886032 env[1801]: 2024-02-09 09:48:38.874 [INFO][5130] ipam_plugin.go 443: Releasing address using workloadID ContainerID="95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3" HandleID="k8s-pod-network.95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3" Workload="ip--172--31--30--62-k8s-calico--kube--controllers--6fcdf54d4d--n9f7w-eth0" Feb 9 09:48:38.886032 env[1801]: 2024-02-09 09:48:38.880 [INFO][5130] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 09:48:38.886032 env[1801]: 2024-02-09 09:48:38.882 [INFO][5123] k8s.go 591: Teardown processing complete. ContainerID="95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3" Feb 9 09:48:38.887028 env[1801]: time="2024-02-09T09:48:38.886075682Z" level=info msg="TearDown network for sandbox \"95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3\" successfully" Feb 9 09:48:38.887028 env[1801]: time="2024-02-09T09:48:38.886123354Z" level=info msg="StopPodSandbox for \"95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3\" returns successfully" Feb 9 09:48:38.887512 env[1801]: time="2024-02-09T09:48:38.887461092Z" level=info msg="RemovePodSandbox for \"95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3\"" Feb 9 09:48:38.887843 env[1801]: time="2024-02-09T09:48:38.887778299Z" level=info msg="Forcibly stopping sandbox \"95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3\"" Feb 9 09:48:39.128652 env[1801]: 2024-02-09 09:48:39.010 [WARNING][5149] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--62-k8s-calico--kube--controllers--6fcdf54d4d--n9f7w-eth0", GenerateName:"calico-kube-controllers-6fcdf54d4d-", Namespace:"calico-system", SelfLink:"", UID:"e000c922-03b1-4fd6-9ba0-d228ef27458c", ResourceVersion:"746", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 48, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6fcdf54d4d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-62", ContainerID:"dc9006bd1c4cae56b9c3c5727d16882a21665f3ce04a0e38c5ef36632c2538b0", Pod:"calico-kube-controllers-6fcdf54d4d-n9f7w", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.119.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7f54e13fb3e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:48:39.128652 env[1801]: 2024-02-09 09:48:39.011 [INFO][5149] k8s.go 578: Cleaning up netns ContainerID="95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3" Feb 9 09:48:39.128652 env[1801]: 2024-02-09 09:48:39.011 [INFO][5149] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3" iface="eth0" netns="" Feb 9 09:48:39.128652 env[1801]: 2024-02-09 09:48:39.011 [INFO][5149] k8s.go 585: Releasing IP address(es) ContainerID="95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3" Feb 9 09:48:39.128652 env[1801]: 2024-02-09 09:48:39.011 [INFO][5149] utils.go 188: Calico CNI releasing IP address ContainerID="95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3" Feb 9 09:48:39.128652 env[1801]: 2024-02-09 09:48:39.090 [INFO][5155] ipam_plugin.go 415: Releasing address using handleID ContainerID="95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3" HandleID="k8s-pod-network.95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3" Workload="ip--172--31--30--62-k8s-calico--kube--controllers--6fcdf54d4d--n9f7w-eth0" Feb 9 09:48:39.128652 env[1801]: 2024-02-09 09:48:39.090 [INFO][5155] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 09:48:39.128652 env[1801]: 2024-02-09 09:48:39.090 [INFO][5155] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 09:48:39.128652 env[1801]: 2024-02-09 09:48:39.105 [WARNING][5155] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3" HandleID="k8s-pod-network.95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3" Workload="ip--172--31--30--62-k8s-calico--kube--controllers--6fcdf54d4d--n9f7w-eth0" Feb 9 09:48:39.128652 env[1801]: 2024-02-09 09:48:39.105 [INFO][5155] ipam_plugin.go 443: Releasing address using workloadID ContainerID="95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3" HandleID="k8s-pod-network.95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3" Workload="ip--172--31--30--62-k8s-calico--kube--controllers--6fcdf54d4d--n9f7w-eth0" Feb 9 09:48:39.128652 env[1801]: 2024-02-09 09:48:39.107 [INFO][5155] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 09:48:39.128652 env[1801]: 2024-02-09 09:48:39.120 [INFO][5149] k8s.go 591: Teardown processing complete. ContainerID="95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3" Feb 9 09:48:39.129939 env[1801]: time="2024-02-09T09:48:39.129890901Z" level=info msg="TearDown network for sandbox \"95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3\" successfully" Feb 9 09:48:39.135554 env[1801]: time="2024-02-09T09:48:39.135489790Z" level=info msg="RemovePodSandbox \"95f857ae8e1ca437d1f063b1aa3055076ca2198e546b6422a9150e4b446c8ce3\" returns successfully" Feb 9 09:48:39.138421 env[1801]: time="2024-02-09T09:48:39.138354671Z" level=info msg="StopPodSandbox for \"be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5\"" Feb 9 09:48:39.326665 env[1801]: 2024-02-09 09:48:39.235 [WARNING][5175] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--62-k8s-csi--node--driver--ptrbz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b6105262-fb93-4a15-bf14-4f48140174ba", ResourceVersion:"770", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 48, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-62", ContainerID:"16f00e58914b80b80f1864ba2be6dbb79a47bdbe49e21e760a3da84eb2726625", Pod:"csi-node-driver-ptrbz", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.119.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali0669725dadc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:48:39.326665 env[1801]: 2024-02-09 09:48:39.235 [INFO][5175] k8s.go 578: Cleaning up netns ContainerID="be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5" Feb 9 09:48:39.326665 env[1801]: 2024-02-09 09:48:39.235 [INFO][5175] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5" iface="eth0" netns="" Feb 9 09:48:39.326665 env[1801]: 2024-02-09 09:48:39.236 [INFO][5175] k8s.go 585: Releasing IP address(es) ContainerID="be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5" Feb 9 09:48:39.326665 env[1801]: 2024-02-09 09:48:39.236 [INFO][5175] utils.go 188: Calico CNI releasing IP address ContainerID="be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5" Feb 9 09:48:39.326665 env[1801]: 2024-02-09 09:48:39.304 [INFO][5181] ipam_plugin.go 415: Releasing address using handleID ContainerID="be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5" HandleID="k8s-pod-network.be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5" Workload="ip--172--31--30--62-k8s-csi--node--driver--ptrbz-eth0" Feb 9 09:48:39.326665 env[1801]: 2024-02-09 09:48:39.305 [INFO][5181] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 09:48:39.326665 env[1801]: 2024-02-09 09:48:39.305 [INFO][5181] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 09:48:39.326665 env[1801]: 2024-02-09 09:48:39.319 [WARNING][5181] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5" HandleID="k8s-pod-network.be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5" Workload="ip--172--31--30--62-k8s-csi--node--driver--ptrbz-eth0" Feb 9 09:48:39.326665 env[1801]: 2024-02-09 09:48:39.319 [INFO][5181] ipam_plugin.go 443: Releasing address using workloadID ContainerID="be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5" HandleID="k8s-pod-network.be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5" Workload="ip--172--31--30--62-k8s-csi--node--driver--ptrbz-eth0" Feb 9 09:48:39.326665 env[1801]: 2024-02-09 09:48:39.321 [INFO][5181] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 09:48:39.326665 env[1801]: 2024-02-09 09:48:39.323 [INFO][5175] k8s.go 591: Teardown processing complete. ContainerID="be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5" Feb 9 09:48:39.327756 env[1801]: time="2024-02-09T09:48:39.327705048Z" level=info msg="TearDown network for sandbox \"be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5\" successfully" Feb 9 09:48:39.327900 env[1801]: time="2024-02-09T09:48:39.327868128Z" level=info msg="StopPodSandbox for \"be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5\" returns successfully" Feb 9 09:48:39.328927 env[1801]: time="2024-02-09T09:48:39.328853482Z" level=info msg="RemovePodSandbox for \"be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5\"" Feb 9 09:48:39.329088 env[1801]: time="2024-02-09T09:48:39.328927288Z" level=info msg="Forcibly stopping sandbox \"be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5\"" Feb 9 09:48:39.494142 env[1801]: 2024-02-09 09:48:39.398 [WARNING][5199] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--62-k8s-csi--node--driver--ptrbz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b6105262-fb93-4a15-bf14-4f48140174ba", ResourceVersion:"770", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 48, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-62", ContainerID:"16f00e58914b80b80f1864ba2be6dbb79a47bdbe49e21e760a3da84eb2726625", Pod:"csi-node-driver-ptrbz", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.119.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali0669725dadc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:48:39.494142 env[1801]: 2024-02-09 09:48:39.399 [INFO][5199] k8s.go 578: Cleaning up netns ContainerID="be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5" Feb 9 09:48:39.494142 env[1801]: 2024-02-09 09:48:39.399 [INFO][5199] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5" iface="eth0" netns="" Feb 9 09:48:39.494142 env[1801]: 2024-02-09 09:48:39.399 [INFO][5199] k8s.go 585: Releasing IP address(es) ContainerID="be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5" Feb 9 09:48:39.494142 env[1801]: 2024-02-09 09:48:39.399 [INFO][5199] utils.go 188: Calico CNI releasing IP address ContainerID="be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5" Feb 9 09:48:39.494142 env[1801]: 2024-02-09 09:48:39.448 [INFO][5205] ipam_plugin.go 415: Releasing address using handleID ContainerID="be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5" HandleID="k8s-pod-network.be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5" Workload="ip--172--31--30--62-k8s-csi--node--driver--ptrbz-eth0" Feb 9 09:48:39.494142 env[1801]: 2024-02-09 09:48:39.448 [INFO][5205] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 09:48:39.494142 env[1801]: 2024-02-09 09:48:39.449 [INFO][5205] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 09:48:39.494142 env[1801]: 2024-02-09 09:48:39.471 [WARNING][5205] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5" HandleID="k8s-pod-network.be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5" Workload="ip--172--31--30--62-k8s-csi--node--driver--ptrbz-eth0" Feb 9 09:48:39.494142 env[1801]: 2024-02-09 09:48:39.474 [INFO][5205] ipam_plugin.go 443: Releasing address using workloadID ContainerID="be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5" HandleID="k8s-pod-network.be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5" Workload="ip--172--31--30--62-k8s-csi--node--driver--ptrbz-eth0" Feb 9 09:48:39.494142 env[1801]: 2024-02-09 09:48:39.479 [INFO][5205] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 09:48:39.494142 env[1801]: 2024-02-09 09:48:39.490 [INFO][5199] k8s.go 591: Teardown processing complete. ContainerID="be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5" Feb 9 09:48:39.497029 env[1801]: time="2024-02-09T09:48:39.494102931Z" level=info msg="TearDown network for sandbox \"be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5\" successfully" Feb 9 09:48:39.512775 env[1801]: time="2024-02-09T09:48:39.512705438Z" level=info msg="RemovePodSandbox \"be2efab2d51139abc084d5e1d12efb2a8bbb57ee59b0ce366364d322b43fccf5\" returns successfully" Feb 9 09:48:39.513556 env[1801]: time="2024-02-09T09:48:39.513511886Z" level=info msg="StopPodSandbox for \"330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68\"" Feb 9 09:48:39.649057 kubelet[3093]: I0209 09:48:39.648974 3093 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:48:39.686315 kubelet[3093]: I0209 09:48:39.686242 3093 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:48:39.774092 kubelet[3093]: I0209 09:48:39.773841 3093 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2666086d-fd72-4b2b-a4a3-9466682455d1-calico-apiserver-certs\") pod \"calico-apiserver-85f7786ffb-r2rrt\" (UID: \"2666086d-fd72-4b2b-a4a3-9466682455d1\") " pod="calico-apiserver/calico-apiserver-85f7786ffb-r2rrt" Feb 9 09:48:39.774092 kubelet[3093]: I0209 09:48:39.773926 3093 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfrjz\" (UniqueName: \"kubernetes.io/projected/2666086d-fd72-4b2b-a4a3-9466682455d1-kube-api-access-mfrjz\") pod \"calico-apiserver-85f7786ffb-r2rrt\" (UID: \"2666086d-fd72-4b2b-a4a3-9466682455d1\") " pod="calico-apiserver/calico-apiserver-85f7786ffb-r2rrt" Feb 9 09:48:39.774092 kubelet[3093]: I0209 09:48:39.773985 3093 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-jw2z2\" (UniqueName: \"kubernetes.io/projected/f97db8eb-684f-4649-a41f-7dd6eb854a9f-kube-api-access-jw2z2\") pod \"calico-apiserver-85f7786ffb-x6rjz\" (UID: \"f97db8eb-684f-4649-a41f-7dd6eb854a9f\") " pod="calico-apiserver/calico-apiserver-85f7786ffb-x6rjz" Feb 9 09:48:39.774092 kubelet[3093]: I0209 09:48:39.774042 3093 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f97db8eb-684f-4649-a41f-7dd6eb854a9f-calico-apiserver-certs\") pod \"calico-apiserver-85f7786ffb-x6rjz\" (UID: \"f97db8eb-684f-4649-a41f-7dd6eb854a9f\") " pod="calico-apiserver/calico-apiserver-85f7786ffb-x6rjz" Feb 9 09:48:39.827509 env[1801]: 2024-02-09 09:48:39.723 [WARNING][5223] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--62-k8s-coredns--787d4945fb--gm6gf-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"ee11eee1-ab84-4220-bd28-7c74f0bfcde8", ResourceVersion:"715", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 47, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-62", 
ContainerID:"449cd8107ed99986e48cf7954fe669f5a411c17bd9f0d1ee2c499e9cd1cd945c", Pod:"coredns-787d4945fb-gm6gf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.119.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9442450294f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:48:39.827509 env[1801]: 2024-02-09 09:48:39.723 [INFO][5223] k8s.go 578: Cleaning up netns ContainerID="330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68" Feb 9 09:48:39.827509 env[1801]: 2024-02-09 09:48:39.724 [INFO][5223] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68" iface="eth0" netns="" Feb 9 09:48:39.827509 env[1801]: 2024-02-09 09:48:39.724 [INFO][5223] k8s.go 585: Releasing IP address(es) ContainerID="330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68" Feb 9 09:48:39.827509 env[1801]: 2024-02-09 09:48:39.724 [INFO][5223] utils.go 188: Calico CNI releasing IP address ContainerID="330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68" Feb 9 09:48:39.827509 env[1801]: 2024-02-09 09:48:39.802 [INFO][5230] ipam_plugin.go 415: Releasing address using handleID ContainerID="330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68" HandleID="k8s-pod-network.330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68" Workload="ip--172--31--30--62-k8s-coredns--787d4945fb--gm6gf-eth0" Feb 9 09:48:39.827509 env[1801]: 2024-02-09 09:48:39.802 [INFO][5230] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 09:48:39.827509 env[1801]: 2024-02-09 09:48:39.802 [INFO][5230] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 09:48:39.827509 env[1801]: 2024-02-09 09:48:39.819 [WARNING][5230] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68" HandleID="k8s-pod-network.330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68" Workload="ip--172--31--30--62-k8s-coredns--787d4945fb--gm6gf-eth0" Feb 9 09:48:39.827509 env[1801]: 2024-02-09 09:48:39.819 [INFO][5230] ipam_plugin.go 443: Releasing address using workloadID ContainerID="330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68" HandleID="k8s-pod-network.330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68" Workload="ip--172--31--30--62-k8s-coredns--787d4945fb--gm6gf-eth0" Feb 9 09:48:39.827509 env[1801]: 2024-02-09 09:48:39.821 [INFO][5230] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 09:48:39.827509 env[1801]: 2024-02-09 09:48:39.824 [INFO][5223] k8s.go 591: Teardown processing complete. ContainerID="330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68" Feb 9 09:48:39.831179 env[1801]: time="2024-02-09T09:48:39.831120067Z" level=info msg="TearDown network for sandbox \"330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68\" successfully" Feb 9 09:48:39.831386 env[1801]: time="2024-02-09T09:48:39.831351121Z" level=info msg="StopPodSandbox for \"330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68\" returns successfully" Feb 9 09:48:39.832821 env[1801]: time="2024-02-09T09:48:39.832769463Z" level=info msg="RemovePodSandbox for \"330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68\"" Feb 9 09:48:39.833211 env[1801]: time="2024-02-09T09:48:39.833087391Z" level=info msg="Forcibly stopping sandbox \"330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68\"" Feb 9 09:48:39.877139 kubelet[3093]: E0209 09:48:39.876631 3093 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Feb 9 09:48:39.877139 kubelet[3093]: E0209 09:48:39.876765 3093 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f97db8eb-684f-4649-a41f-7dd6eb854a9f-calico-apiserver-certs podName:f97db8eb-684f-4649-a41f-7dd6eb854a9f nodeName:}" failed. No retries permitted until 2024-02-09 09:48:40.376732778 +0000 UTC m=+62.025868703 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/f97db8eb-684f-4649-a41f-7dd6eb854a9f-calico-apiserver-certs") pod "calico-apiserver-85f7786ffb-x6rjz" (UID: "f97db8eb-684f-4649-a41f-7dd6eb854a9f") : secret "calico-apiserver-certs" not found Feb 9 09:48:39.877139 kubelet[3093]: E0209 09:48:39.876631 3093 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Feb 9 09:48:39.877139 kubelet[3093]: E0209 09:48:39.876976 3093 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2666086d-fd72-4b2b-a4a3-9466682455d1-calico-apiserver-certs podName:2666086d-fd72-4b2b-a4a3-9466682455d1 nodeName:}" failed. No retries permitted until 2024-02-09 09:48:40.376957245 +0000 UTC m=+62.026093170 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/2666086d-fd72-4b2b-a4a3-9466682455d1-calico-apiserver-certs") pod "calico-apiserver-85f7786ffb-r2rrt" (UID: "2666086d-fd72-4b2b-a4a3-9466682455d1") : secret "calico-apiserver-certs" not found Feb 9 09:48:39.989383 kernel: kauditd_printk_skb: 8 callbacks suppressed Feb 9 09:48:39.989628 kernel: audit: type=1325 audit(1707472119.984:312): table=filter:125 family=2 entries=6 op=nft_register_rule pid=5278 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:48:39.984000 audit[5278]: NETFILTER_CFG table=filter:125 family=2 entries=6 op=nft_register_rule pid=5278 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:48:39.984000 audit[5278]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffec3f4160 a2=0 a3=ffffbe0136c0 items=0 ppid=3252 pid=5278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:40.009724 kernel: audit: 
type=1300 audit(1707472119.984:312): arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffec3f4160 a2=0 a3=ffffbe0136c0 items=0 ppid=3252 pid=5278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:39.984000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:48:40.022263 kernel: audit: type=1327 audit(1707472119.984:312): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:48:40.037000 audit[5278]: NETFILTER_CFG table=nat:126 family=2 entries=78 op=nft_register_rule pid=5278 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:48:40.046709 kernel: audit: type=1325 audit(1707472120.037:313): table=nat:126 family=2 entries=78 op=nft_register_rule pid=5278 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:48:40.037000 audit[5278]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=ffffec3f4160 a2=0 a3=ffffbe0136c0 items=0 ppid=3252 pid=5278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:40.037000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:48:40.073158 kernel: audit: type=1300 audit(1707472120.037:313): arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=ffffec3f4160 a2=0 a3=ffffbe0136c0 items=0 ppid=3252 pid=5278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:40.073330 
kernel: audit: type=1327 audit(1707472120.037:313): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:48:40.183693 env[1801]: 2024-02-09 09:48:40.074 [WARNING][5264] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--62-k8s-coredns--787d4945fb--gm6gf-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"ee11eee1-ab84-4220-bd28-7c74f0bfcde8", ResourceVersion:"715", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 47, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-62", ContainerID:"449cd8107ed99986e48cf7954fe669f5a411c17bd9f0d1ee2c499e9cd1cd945c", Pod:"coredns-787d4945fb-gm6gf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.119.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9442450294f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:48:40.183693 env[1801]: 2024-02-09 09:48:40.074 [INFO][5264] k8s.go 578: Cleaning up netns ContainerID="330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68" Feb 9 09:48:40.183693 env[1801]: 2024-02-09 09:48:40.074 [INFO][5264] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68" iface="eth0" netns="" Feb 9 09:48:40.183693 env[1801]: 2024-02-09 09:48:40.074 [INFO][5264] k8s.go 585: Releasing IP address(es) ContainerID="330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68" Feb 9 09:48:40.183693 env[1801]: 2024-02-09 09:48:40.074 [INFO][5264] utils.go 188: Calico CNI releasing IP address ContainerID="330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68" Feb 9 09:48:40.183693 env[1801]: 2024-02-09 09:48:40.157 [INFO][5283] ipam_plugin.go 415: Releasing address using handleID ContainerID="330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68" HandleID="k8s-pod-network.330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68" Workload="ip--172--31--30--62-k8s-coredns--787d4945fb--gm6gf-eth0" Feb 9 09:48:40.183693 env[1801]: 2024-02-09 09:48:40.157 [INFO][5283] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 09:48:40.183693 env[1801]: 2024-02-09 09:48:40.161 [INFO][5283] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 09:48:40.183693 env[1801]: 2024-02-09 09:48:40.175 [WARNING][5283] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68" HandleID="k8s-pod-network.330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68" Workload="ip--172--31--30--62-k8s-coredns--787d4945fb--gm6gf-eth0" Feb 9 09:48:40.183693 env[1801]: 2024-02-09 09:48:40.175 [INFO][5283] ipam_plugin.go 443: Releasing address using workloadID ContainerID="330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68" HandleID="k8s-pod-network.330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68" Workload="ip--172--31--30--62-k8s-coredns--787d4945fb--gm6gf-eth0" Feb 9 09:48:40.183693 env[1801]: 2024-02-09 09:48:40.178 [INFO][5283] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 09:48:40.183693 env[1801]: 2024-02-09 09:48:40.181 [INFO][5264] k8s.go 591: Teardown processing complete. ContainerID="330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68" Feb 9 09:48:40.184841 env[1801]: time="2024-02-09T09:48:40.184784715Z" level=info msg="TearDown network for sandbox \"330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68\" successfully" Feb 9 09:48:40.192804 env[1801]: time="2024-02-09T09:48:40.192738535Z" level=info msg="RemovePodSandbox \"330f2c35210d5de7f261cc963261b3053af53947771ef2132b6ff0dde3129d68\" returns successfully" Feb 9 09:48:40.195232 env[1801]: time="2024-02-09T09:48:40.194984081Z" level=info msg="StopPodSandbox for \"7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d\"" Feb 9 09:48:40.329000 audit[5338]: NETFILTER_CFG table=filter:127 family=2 entries=7 op=nft_register_rule pid=5338 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:48:40.329000 audit[5338]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=ffffe562f130 a2=0 a3=ffffa03ac6c0 items=0 ppid=3252 pid=5338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:40.349372 kernel: audit: type=1325 audit(1707472120.329:314): table=filter:127 family=2 entries=7 op=nft_register_rule pid=5338 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:48:40.353037 kernel: audit: type=1300 audit(1707472120.329:314): arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=ffffe562f130 a2=0 a3=ffffa03ac6c0 items=0 ppid=3252 pid=5338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:40.356594 kernel: audit: type=1327 audit(1707472120.329:314): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:48:40.329000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:48:40.333000 audit[5338]: NETFILTER_CFG table=nat:128 family=2 entries=78 op=nft_register_rule pid=5338 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:48:40.371185 kernel: audit: type=1325 audit(1707472120.333:315): table=nat:128 family=2 entries=78 op=nft_register_rule pid=5338 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:48:40.333000 audit[5338]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=ffffe562f130 a2=0 a3=ffffa03ac6c0 items=0 ppid=3252 pid=5338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:40.333000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:48:40.436939 env[1801]: 2024-02-09 09:48:40.301 [WARNING][5317] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint 
ConainerID, don't delete WEP. ContainerID="7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--62-k8s-coredns--787d4945fb--zgw6s-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"d7a41dab-c1c4-470c-a625-78b48d2cd3c8", ResourceVersion:"729", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 47, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-62", ContainerID:"01df923f4aa76bcd3690570c574be7ec74c82e0d29b71748e13e73e3b6919f2a", Pod:"coredns-787d4945fb-zgw6s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.119.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali81bfaa31879", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:48:40.436939 env[1801]: 2024-02-09 09:48:40.301 [INFO][5317] 
k8s.go 578: Cleaning up netns ContainerID="7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d" Feb 9 09:48:40.436939 env[1801]: 2024-02-09 09:48:40.301 [INFO][5317] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d" iface="eth0" netns="" Feb 9 09:48:40.436939 env[1801]: 2024-02-09 09:48:40.302 [INFO][5317] k8s.go 585: Releasing IP address(es) ContainerID="7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d" Feb 9 09:48:40.436939 env[1801]: 2024-02-09 09:48:40.302 [INFO][5317] utils.go 188: Calico CNI releasing IP address ContainerID="7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d" Feb 9 09:48:40.436939 env[1801]: 2024-02-09 09:48:40.368 [INFO][5331] ipam_plugin.go 415: Releasing address using handleID ContainerID="7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d" HandleID="k8s-pod-network.7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d" Workload="ip--172--31--30--62-k8s-coredns--787d4945fb--zgw6s-eth0" Feb 9 09:48:40.436939 env[1801]: 2024-02-09 09:48:40.368 [INFO][5331] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 09:48:40.436939 env[1801]: 2024-02-09 09:48:40.368 [INFO][5331] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 09:48:40.436939 env[1801]: 2024-02-09 09:48:40.422 [WARNING][5331] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d" HandleID="k8s-pod-network.7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d" Workload="ip--172--31--30--62-k8s-coredns--787d4945fb--zgw6s-eth0" Feb 9 09:48:40.436939 env[1801]: 2024-02-09 09:48:40.423 [INFO][5331] ipam_plugin.go 443: Releasing address using workloadID ContainerID="7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d" HandleID="k8s-pod-network.7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d" Workload="ip--172--31--30--62-k8s-coredns--787d4945fb--zgw6s-eth0" Feb 9 09:48:40.436939 env[1801]: 2024-02-09 09:48:40.428 [INFO][5331] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 09:48:40.436939 env[1801]: 2024-02-09 09:48:40.434 [INFO][5317] k8s.go 591: Teardown processing complete. ContainerID="7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d" Feb 9 09:48:40.437928 env[1801]: time="2024-02-09T09:48:40.436997040Z" level=info msg="TearDown network for sandbox \"7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d\" successfully" Feb 9 09:48:40.437928 env[1801]: time="2024-02-09T09:48:40.437052020Z" level=info msg="StopPodSandbox for \"7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d\" returns successfully" Feb 9 09:48:40.437928 env[1801]: time="2024-02-09T09:48:40.437799807Z" level=info msg="RemovePodSandbox for \"7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d\"" Feb 9 09:48:40.437928 env[1801]: time="2024-02-09T09:48:40.437849940Z" level=info msg="Forcibly stopping sandbox \"7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d\"" Feb 9 09:48:40.559889 env[1801]: time="2024-02-09T09:48:40.559517987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85f7786ffb-x6rjz,Uid:f97db8eb-684f-4649-a41f-7dd6eb854a9f,Namespace:calico-apiserver,Attempt:0,}" Feb 9 09:48:40.614628 env[1801]: 2024-02-09 09:48:40.518 [WARNING][5356] 
k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--62-k8s-coredns--787d4945fb--zgw6s-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"d7a41dab-c1c4-470c-a625-78b48d2cd3c8", ResourceVersion:"729", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 47, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-62", ContainerID:"01df923f4aa76bcd3690570c574be7ec74c82e0d29b71748e13e73e3b6919f2a", Pod:"coredns-787d4945fb-zgw6s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.119.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali81bfaa31879", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 
09:48:40.614628 env[1801]: 2024-02-09 09:48:40.519 [INFO][5356] k8s.go 578: Cleaning up netns ContainerID="7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d" Feb 9 09:48:40.614628 env[1801]: 2024-02-09 09:48:40.519 [INFO][5356] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d" iface="eth0" netns="" Feb 9 09:48:40.614628 env[1801]: 2024-02-09 09:48:40.519 [INFO][5356] k8s.go 585: Releasing IP address(es) ContainerID="7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d" Feb 9 09:48:40.614628 env[1801]: 2024-02-09 09:48:40.519 [INFO][5356] utils.go 188: Calico CNI releasing IP address ContainerID="7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d" Feb 9 09:48:40.614628 env[1801]: 2024-02-09 09:48:40.564 [INFO][5363] ipam_plugin.go 415: Releasing address using handleID ContainerID="7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d" HandleID="k8s-pod-network.7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d" Workload="ip--172--31--30--62-k8s-coredns--787d4945fb--zgw6s-eth0" Feb 9 09:48:40.614628 env[1801]: 2024-02-09 09:48:40.564 [INFO][5363] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 09:48:40.614628 env[1801]: 2024-02-09 09:48:40.564 [INFO][5363] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 09:48:40.614628 env[1801]: 2024-02-09 09:48:40.596 [WARNING][5363] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d" HandleID="k8s-pod-network.7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d" Workload="ip--172--31--30--62-k8s-coredns--787d4945fb--zgw6s-eth0" Feb 9 09:48:40.614628 env[1801]: 2024-02-09 09:48:40.596 [INFO][5363] ipam_plugin.go 443: Releasing address using workloadID ContainerID="7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d" HandleID="k8s-pod-network.7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d" Workload="ip--172--31--30--62-k8s-coredns--787d4945fb--zgw6s-eth0" Feb 9 09:48:40.614628 env[1801]: 2024-02-09 09:48:40.599 [INFO][5363] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 09:48:40.614628 env[1801]: 2024-02-09 09:48:40.608 [INFO][5356] k8s.go 591: Teardown processing complete. ContainerID="7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d" Feb 9 09:48:40.614628 env[1801]: time="2024-02-09T09:48:40.613402963Z" level=info msg="TearDown network for sandbox \"7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d\" successfully" Feb 9 09:48:40.617375 env[1801]: time="2024-02-09T09:48:40.617316044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85f7786ffb-r2rrt,Uid:2666086d-fd72-4b2b-a4a3-9466682455d1,Namespace:calico-apiserver,Attempt:0,}" Feb 9 09:48:40.625164 env[1801]: time="2024-02-09T09:48:40.625104359Z" level=info msg="RemovePodSandbox \"7d9ee8ac37dd39ec41a99526e6d1da0f52f2eff47479f972edfd82e7333f718d\" returns successfully" Feb 9 09:48:40.963728 systemd-networkd[1595]: calia2114aa1150: Link UP Feb 9 09:48:40.975441 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 09:48:40.975607 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calia2114aa1150: link becomes ready Feb 9 09:48:40.975827 systemd-networkd[1595]: calia2114aa1150: Gained carrier Feb 9 09:48:40.975894 (udev-worker)[5408]: Network interface NamePolicy= disabled on kernel command line. 
Feb 9 09:48:41.021934 env[1801]: 2024-02-09 09:48:40.692 [INFO][5369] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--62-k8s-calico--apiserver--85f7786ffb--x6rjz-eth0 calico-apiserver-85f7786ffb- calico-apiserver f97db8eb-684f-4649-a41f-7dd6eb854a9f 815 0 2024-02-09 09:48:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:85f7786ffb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-30-62 calico-apiserver-85f7786ffb-x6rjz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia2114aa1150 [] []}} ContainerID="779ab6f5ba985fe74e3d2c9379aadc4d36684ef127378b685b9221b4ee49d16e" Namespace="calico-apiserver" Pod="calico-apiserver-85f7786ffb-x6rjz" WorkloadEndpoint="ip--172--31--30--62-k8s-calico--apiserver--85f7786ffb--x6rjz-" Feb 9 09:48:41.021934 env[1801]: 2024-02-09 09:48:40.693 [INFO][5369] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="779ab6f5ba985fe74e3d2c9379aadc4d36684ef127378b685b9221b4ee49d16e" Namespace="calico-apiserver" Pod="calico-apiserver-85f7786ffb-x6rjz" WorkloadEndpoint="ip--172--31--30--62-k8s-calico--apiserver--85f7786ffb--x6rjz-eth0" Feb 9 09:48:41.021934 env[1801]: 2024-02-09 09:48:40.821 [INFO][5393] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="779ab6f5ba985fe74e3d2c9379aadc4d36684ef127378b685b9221b4ee49d16e" HandleID="k8s-pod-network.779ab6f5ba985fe74e3d2c9379aadc4d36684ef127378b685b9221b4ee49d16e" Workload="ip--172--31--30--62-k8s-calico--apiserver--85f7786ffb--x6rjz-eth0" Feb 9 09:48:41.021934 env[1801]: 2024-02-09 09:48:40.857 [INFO][5393] ipam_plugin.go 268: Auto assigning IP ContainerID="779ab6f5ba985fe74e3d2c9379aadc4d36684ef127378b685b9221b4ee49d16e" 
HandleID="k8s-pod-network.779ab6f5ba985fe74e3d2c9379aadc4d36684ef127378b685b9221b4ee49d16e" Workload="ip--172--31--30--62-k8s-calico--apiserver--85f7786ffb--x6rjz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002bc7b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-30-62", "pod":"calico-apiserver-85f7786ffb-x6rjz", "timestamp":"2024-02-09 09:48:40.821082884 +0000 UTC"}, Hostname:"ip-172-31-30-62", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 09:48:41.021934 env[1801]: 2024-02-09 09:48:40.858 [INFO][5393] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 09:48:41.021934 env[1801]: 2024-02-09 09:48:40.858 [INFO][5393] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 09:48:41.021934 env[1801]: 2024-02-09 09:48:40.858 [INFO][5393] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-62' Feb 9 09:48:41.021934 env[1801]: 2024-02-09 09:48:40.865 [INFO][5393] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.779ab6f5ba985fe74e3d2c9379aadc4d36684ef127378b685b9221b4ee49d16e" host="ip-172-31-30-62" Feb 9 09:48:41.021934 env[1801]: 2024-02-09 09:48:40.877 [INFO][5393] ipam.go 372: Looking up existing affinities for host host="ip-172-31-30-62" Feb 9 09:48:41.021934 env[1801]: 2024-02-09 09:48:40.886 [INFO][5393] ipam.go 489: Trying affinity for 192.168.119.192/26 host="ip-172-31-30-62" Feb 9 09:48:41.021934 env[1801]: 2024-02-09 09:48:40.892 [INFO][5393] ipam.go 155: Attempting to load block cidr=192.168.119.192/26 host="ip-172-31-30-62" Feb 9 09:48:41.021934 env[1801]: 2024-02-09 09:48:40.899 [INFO][5393] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.119.192/26 host="ip-172-31-30-62" Feb 9 09:48:41.021934 env[1801]: 2024-02-09 09:48:40.900 [INFO][5393] ipam.go 1180: 
Attempting to assign 1 addresses from block block=192.168.119.192/26 handle="k8s-pod-network.779ab6f5ba985fe74e3d2c9379aadc4d36684ef127378b685b9221b4ee49d16e" host="ip-172-31-30-62" Feb 9 09:48:41.021934 env[1801]: 2024-02-09 09:48:40.911 [INFO][5393] ipam.go 1682: Creating new handle: k8s-pod-network.779ab6f5ba985fe74e3d2c9379aadc4d36684ef127378b685b9221b4ee49d16e Feb 9 09:48:41.021934 env[1801]: 2024-02-09 09:48:40.919 [INFO][5393] ipam.go 1203: Writing block in order to claim IPs block=192.168.119.192/26 handle="k8s-pod-network.779ab6f5ba985fe74e3d2c9379aadc4d36684ef127378b685b9221b4ee49d16e" host="ip-172-31-30-62" Feb 9 09:48:41.021934 env[1801]: 2024-02-09 09:48:40.937 [INFO][5393] ipam.go 1216: Successfully claimed IPs: [192.168.119.197/26] block=192.168.119.192/26 handle="k8s-pod-network.779ab6f5ba985fe74e3d2c9379aadc4d36684ef127378b685b9221b4ee49d16e" host="ip-172-31-30-62" Feb 9 09:48:41.021934 env[1801]: 2024-02-09 09:48:40.938 [INFO][5393] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.119.197/26] handle="k8s-pod-network.779ab6f5ba985fe74e3d2c9379aadc4d36684ef127378b685b9221b4ee49d16e" host="ip-172-31-30-62" Feb 9 09:48:41.021934 env[1801]: 2024-02-09 09:48:40.938 [INFO][5393] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 09:48:41.021934 env[1801]: 2024-02-09 09:48:40.938 [INFO][5393] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.119.197/26] IPv6=[] ContainerID="779ab6f5ba985fe74e3d2c9379aadc4d36684ef127378b685b9221b4ee49d16e" HandleID="k8s-pod-network.779ab6f5ba985fe74e3d2c9379aadc4d36684ef127378b685b9221b4ee49d16e" Workload="ip--172--31--30--62-k8s-calico--apiserver--85f7786ffb--x6rjz-eth0" Feb 9 09:48:41.023644 env[1801]: 2024-02-09 09:48:40.957 [INFO][5369] k8s.go 385: Populated endpoint ContainerID="779ab6f5ba985fe74e3d2c9379aadc4d36684ef127378b685b9221b4ee49d16e" Namespace="calico-apiserver" Pod="calico-apiserver-85f7786ffb-x6rjz" WorkloadEndpoint="ip--172--31--30--62-k8s-calico--apiserver--85f7786ffb--x6rjz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--62-k8s-calico--apiserver--85f7786ffb--x6rjz-eth0", GenerateName:"calico-apiserver-85f7786ffb-", Namespace:"calico-apiserver", SelfLink:"", UID:"f97db8eb-684f-4649-a41f-7dd6eb854a9f", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 48, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85f7786ffb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-62", ContainerID:"", Pod:"calico-apiserver-85f7786ffb-x6rjz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.119.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia2114aa1150", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:48:41.023644 env[1801]: 2024-02-09 09:48:40.957 [INFO][5369] k8s.go 386: Calico CNI using IPs: [192.168.119.197/32] ContainerID="779ab6f5ba985fe74e3d2c9379aadc4d36684ef127378b685b9221b4ee49d16e" Namespace="calico-apiserver" Pod="calico-apiserver-85f7786ffb-x6rjz" WorkloadEndpoint="ip--172--31--30--62-k8s-calico--apiserver--85f7786ffb--x6rjz-eth0" Feb 9 09:48:41.023644 env[1801]: 2024-02-09 09:48:40.957 [INFO][5369] dataplane_linux.go 68: Setting the host side veth name to calia2114aa1150 ContainerID="779ab6f5ba985fe74e3d2c9379aadc4d36684ef127378b685b9221b4ee49d16e" Namespace="calico-apiserver" Pod="calico-apiserver-85f7786ffb-x6rjz" WorkloadEndpoint="ip--172--31--30--62-k8s-calico--apiserver--85f7786ffb--x6rjz-eth0" Feb 9 09:48:41.023644 env[1801]: 2024-02-09 09:48:40.982 [INFO][5369] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="779ab6f5ba985fe74e3d2c9379aadc4d36684ef127378b685b9221b4ee49d16e" Namespace="calico-apiserver" Pod="calico-apiserver-85f7786ffb-x6rjz" WorkloadEndpoint="ip--172--31--30--62-k8s-calico--apiserver--85f7786ffb--x6rjz-eth0" Feb 9 09:48:41.023644 env[1801]: 2024-02-09 09:48:40.985 [INFO][5369] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="779ab6f5ba985fe74e3d2c9379aadc4d36684ef127378b685b9221b4ee49d16e" Namespace="calico-apiserver" Pod="calico-apiserver-85f7786ffb-x6rjz" WorkloadEndpoint="ip--172--31--30--62-k8s-calico--apiserver--85f7786ffb--x6rjz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--62-k8s-calico--apiserver--85f7786ffb--x6rjz-eth0", GenerateName:"calico-apiserver-85f7786ffb-", Namespace:"calico-apiserver", SelfLink:"", 
UID:"f97db8eb-684f-4649-a41f-7dd6eb854a9f", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 48, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85f7786ffb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-62", ContainerID:"779ab6f5ba985fe74e3d2c9379aadc4d36684ef127378b685b9221b4ee49d16e", Pod:"calico-apiserver-85f7786ffb-x6rjz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.119.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia2114aa1150", MAC:"5a:54:06:8f:5a:90", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:48:41.023644 env[1801]: 2024-02-09 09:48:41.008 [INFO][5369] k8s.go 491: Wrote updated endpoint to datastore ContainerID="779ab6f5ba985fe74e3d2c9379aadc4d36684ef127378b685b9221b4ee49d16e" Namespace="calico-apiserver" Pod="calico-apiserver-85f7786ffb-x6rjz" WorkloadEndpoint="ip--172--31--30--62-k8s-calico--apiserver--85f7786ffb--x6rjz-eth0" Feb 9 09:48:41.058729 (udev-worker)[5417]: Network interface NamePolicy= disabled on kernel command line. 
Feb 9 09:48:41.063491 systemd-networkd[1595]: cali865a43e41bb: Link UP Feb 9 09:48:41.083624 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali865a43e41bb: link becomes ready Feb 9 09:48:41.082481 systemd-networkd[1595]: cali865a43e41bb: Gained carrier Feb 9 09:48:41.108045 env[1801]: 2024-02-09 09:48:40.747 [INFO][5379] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--62-k8s-calico--apiserver--85f7786ffb--r2rrt-eth0 calico-apiserver-85f7786ffb- calico-apiserver 2666086d-fd72-4b2b-a4a3-9466682455d1 818 0 2024-02-09 09:48:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:85f7786ffb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-30-62 calico-apiserver-85f7786ffb-r2rrt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali865a43e41bb [] []}} ContainerID="96602e4ce611e738b8303ad72b3de3583bd39f2ff0e0cc53b3607f0b178fd69f" Namespace="calico-apiserver" Pod="calico-apiserver-85f7786ffb-r2rrt" WorkloadEndpoint="ip--172--31--30--62-k8s-calico--apiserver--85f7786ffb--r2rrt-" Feb 9 09:48:41.108045 env[1801]: 2024-02-09 09:48:40.747 [INFO][5379] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="96602e4ce611e738b8303ad72b3de3583bd39f2ff0e0cc53b3607f0b178fd69f" Namespace="calico-apiserver" Pod="calico-apiserver-85f7786ffb-r2rrt" WorkloadEndpoint="ip--172--31--30--62-k8s-calico--apiserver--85f7786ffb--r2rrt-eth0" Feb 9 09:48:41.108045 env[1801]: 2024-02-09 09:48:40.844 [INFO][5399] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="96602e4ce611e738b8303ad72b3de3583bd39f2ff0e0cc53b3607f0b178fd69f" HandleID="k8s-pod-network.96602e4ce611e738b8303ad72b3de3583bd39f2ff0e0cc53b3607f0b178fd69f" Workload="ip--172--31--30--62-k8s-calico--apiserver--85f7786ffb--r2rrt-eth0" Feb 9 
09:48:41.108045 env[1801]: 2024-02-09 09:48:40.874 [INFO][5399] ipam_plugin.go 268: Auto assigning IP ContainerID="96602e4ce611e738b8303ad72b3de3583bd39f2ff0e0cc53b3607f0b178fd69f" HandleID="k8s-pod-network.96602e4ce611e738b8303ad72b3de3583bd39f2ff0e0cc53b3607f0b178fd69f" Workload="ip--172--31--30--62-k8s-calico--apiserver--85f7786ffb--r2rrt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000203710), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-30-62", "pod":"calico-apiserver-85f7786ffb-r2rrt", "timestamp":"2024-02-09 09:48:40.844678035 +0000 UTC"}, Hostname:"ip-172-31-30-62", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 09:48:41.108045 env[1801]: 2024-02-09 09:48:40.875 [INFO][5399] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 09:48:41.108045 env[1801]: 2024-02-09 09:48:40.938 [INFO][5399] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 9 09:48:41.108045 env[1801]: 2024-02-09 09:48:40.938 [INFO][5399] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-62' Feb 9 09:48:41.108045 env[1801]: 2024-02-09 09:48:40.941 [INFO][5399] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.96602e4ce611e738b8303ad72b3de3583bd39f2ff0e0cc53b3607f0b178fd69f" host="ip-172-31-30-62" Feb 9 09:48:41.108045 env[1801]: 2024-02-09 09:48:40.953 [INFO][5399] ipam.go 372: Looking up existing affinities for host host="ip-172-31-30-62" Feb 9 09:48:41.108045 env[1801]: 2024-02-09 09:48:40.969 [INFO][5399] ipam.go 489: Trying affinity for 192.168.119.192/26 host="ip-172-31-30-62" Feb 9 09:48:41.108045 env[1801]: 2024-02-09 09:48:40.981 [INFO][5399] ipam.go 155: Attempting to load block cidr=192.168.119.192/26 host="ip-172-31-30-62" Feb 9 09:48:41.108045 env[1801]: 2024-02-09 09:48:40.987 [INFO][5399] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.119.192/26 host="ip-172-31-30-62" Feb 9 09:48:41.108045 env[1801]: 2024-02-09 09:48:40.987 [INFO][5399] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.119.192/26 handle="k8s-pod-network.96602e4ce611e738b8303ad72b3de3583bd39f2ff0e0cc53b3607f0b178fd69f" host="ip-172-31-30-62" Feb 9 09:48:41.108045 env[1801]: 2024-02-09 09:48:41.008 [INFO][5399] ipam.go 1682: Creating new handle: k8s-pod-network.96602e4ce611e738b8303ad72b3de3583bd39f2ff0e0cc53b3607f0b178fd69f Feb 9 09:48:41.108045 env[1801]: 2024-02-09 09:48:41.019 [INFO][5399] ipam.go 1203: Writing block in order to claim IPs block=192.168.119.192/26 handle="k8s-pod-network.96602e4ce611e738b8303ad72b3de3583bd39f2ff0e0cc53b3607f0b178fd69f" host="ip-172-31-30-62" Feb 9 09:48:41.108045 env[1801]: 2024-02-09 09:48:41.036 [INFO][5399] ipam.go 1216: Successfully claimed IPs: [192.168.119.198/26] block=192.168.119.192/26 handle="k8s-pod-network.96602e4ce611e738b8303ad72b3de3583bd39f2ff0e0cc53b3607f0b178fd69f" host="ip-172-31-30-62" Feb 9 
09:48:41.108045 env[1801]: 2024-02-09 09:48:41.037 [INFO][5399] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.119.198/26] handle="k8s-pod-network.96602e4ce611e738b8303ad72b3de3583bd39f2ff0e0cc53b3607f0b178fd69f" host="ip-172-31-30-62" Feb 9 09:48:41.108045 env[1801]: 2024-02-09 09:48:41.037 [INFO][5399] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 09:48:41.108045 env[1801]: 2024-02-09 09:48:41.037 [INFO][5399] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.119.198/26] IPv6=[] ContainerID="96602e4ce611e738b8303ad72b3de3583bd39f2ff0e0cc53b3607f0b178fd69f" HandleID="k8s-pod-network.96602e4ce611e738b8303ad72b3de3583bd39f2ff0e0cc53b3607f0b178fd69f" Workload="ip--172--31--30--62-k8s-calico--apiserver--85f7786ffb--r2rrt-eth0" Feb 9 09:48:41.109727 env[1801]: 2024-02-09 09:48:41.053 [INFO][5379] k8s.go 385: Populated endpoint ContainerID="96602e4ce611e738b8303ad72b3de3583bd39f2ff0e0cc53b3607f0b178fd69f" Namespace="calico-apiserver" Pod="calico-apiserver-85f7786ffb-r2rrt" WorkloadEndpoint="ip--172--31--30--62-k8s-calico--apiserver--85f7786ffb--r2rrt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--62-k8s-calico--apiserver--85f7786ffb--r2rrt-eth0", GenerateName:"calico-apiserver-85f7786ffb-", Namespace:"calico-apiserver", SelfLink:"", UID:"2666086d-fd72-4b2b-a4a3-9466682455d1", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 48, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85f7786ffb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-62", ContainerID:"", Pod:"calico-apiserver-85f7786ffb-r2rrt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.119.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali865a43e41bb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:48:41.109727 env[1801]: 2024-02-09 09:48:41.053 [INFO][5379] k8s.go 386: Calico CNI using IPs: [192.168.119.198/32] ContainerID="96602e4ce611e738b8303ad72b3de3583bd39f2ff0e0cc53b3607f0b178fd69f" Namespace="calico-apiserver" Pod="calico-apiserver-85f7786ffb-r2rrt" WorkloadEndpoint="ip--172--31--30--62-k8s-calico--apiserver--85f7786ffb--r2rrt-eth0" Feb 9 09:48:41.109727 env[1801]: 2024-02-09 09:48:41.053 [INFO][5379] dataplane_linux.go 68: Setting the host side veth name to cali865a43e41bb ContainerID="96602e4ce611e738b8303ad72b3de3583bd39f2ff0e0cc53b3607f0b178fd69f" Namespace="calico-apiserver" Pod="calico-apiserver-85f7786ffb-r2rrt" WorkloadEndpoint="ip--172--31--30--62-k8s-calico--apiserver--85f7786ffb--r2rrt-eth0" Feb 9 09:48:41.109727 env[1801]: 2024-02-09 09:48:41.082 [INFO][5379] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="96602e4ce611e738b8303ad72b3de3583bd39f2ff0e0cc53b3607f0b178fd69f" Namespace="calico-apiserver" Pod="calico-apiserver-85f7786ffb-r2rrt" WorkloadEndpoint="ip--172--31--30--62-k8s-calico--apiserver--85f7786ffb--r2rrt-eth0" Feb 9 09:48:41.109727 env[1801]: 2024-02-09 09:48:41.083 [INFO][5379] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="96602e4ce611e738b8303ad72b3de3583bd39f2ff0e0cc53b3607f0b178fd69f" Namespace="calico-apiserver" Pod="calico-apiserver-85f7786ffb-r2rrt" 
WorkloadEndpoint="ip--172--31--30--62-k8s-calico--apiserver--85f7786ffb--r2rrt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--62-k8s-calico--apiserver--85f7786ffb--r2rrt-eth0", GenerateName:"calico-apiserver-85f7786ffb-", Namespace:"calico-apiserver", SelfLink:"", UID:"2666086d-fd72-4b2b-a4a3-9466682455d1", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 48, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85f7786ffb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-62", ContainerID:"96602e4ce611e738b8303ad72b3de3583bd39f2ff0e0cc53b3607f0b178fd69f", Pod:"calico-apiserver-85f7786ffb-r2rrt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.119.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali865a43e41bb", MAC:"ca:ee:35:02:de:f5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:48:41.109727 env[1801]: 2024-02-09 09:48:41.101 [INFO][5379] k8s.go 491: Wrote updated endpoint to datastore ContainerID="96602e4ce611e738b8303ad72b3de3583bd39f2ff0e0cc53b3607f0b178fd69f" Namespace="calico-apiserver" Pod="calico-apiserver-85f7786ffb-r2rrt" WorkloadEndpoint="ip--172--31--30--62-k8s-calico--apiserver--85f7786ffb--r2rrt-eth0" Feb 9 
09:48:41.129527 env[1801]: time="2024-02-09T09:48:41.128509551Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:48:41.129527 env[1801]: time="2024-02-09T09:48:41.128663657Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:48:41.129527 env[1801]: time="2024-02-09T09:48:41.128690307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:48:41.129527 env[1801]: time="2024-02-09T09:48:41.129129286Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/779ab6f5ba985fe74e3d2c9379aadc4d36684ef127378b685b9221b4ee49d16e pid=5440 runtime=io.containerd.runc.v2 Feb 9 09:48:41.162000 audit[5461]: NETFILTER_CFG table=filter:129 family=2 entries=59 op=nft_register_chain pid=5461 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 09:48:41.162000 audit[5461]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=29292 a0=3 a1=ffffd9ba3fb0 a2=0 a3=ffff9c53ffa8 items=0 ppid=4200 pid=5461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:41.162000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 09:48:41.205982 env[1801]: time="2024-02-09T09:48:41.196468607Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:48:41.205982 env[1801]: time="2024-02-09T09:48:41.196541490Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:48:41.205982 env[1801]: time="2024-02-09T09:48:41.196585192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:48:41.205982 env[1801]: time="2024-02-09T09:48:41.197096610Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/96602e4ce611e738b8303ad72b3de3583bd39f2ff0e0cc53b3607f0b178fd69f pid=5469 runtime=io.containerd.runc.v2 Feb 9 09:48:41.300000 audit[5492]: NETFILTER_CFG table=filter:130 family=2 entries=56 op=nft_register_chain pid=5492 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 09:48:41.300000 audit[5492]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=27348 a0=3 a1=fffff32ffd60 a2=0 a3=ffff9e9fefa8 items=0 ppid=4200 pid=5492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:41.300000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 09:48:41.396249 env[1801]: time="2024-02-09T09:48:41.396186587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85f7786ffb-x6rjz,Uid:f97db8eb-684f-4649-a41f-7dd6eb854a9f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"779ab6f5ba985fe74e3d2c9379aadc4d36684ef127378b685b9221b4ee49d16e\"" Feb 9 09:48:41.413388 env[1801]: time="2024-02-09T09:48:41.411947725Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\"" Feb 9 09:48:41.429470 env[1801]: time="2024-02-09T09:48:41.429412247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85f7786ffb-r2rrt,Uid:2666086d-fd72-4b2b-a4a3-9466682455d1,Namespace:calico-apiserver,Attempt:0,} 
returns sandbox id \"96602e4ce611e738b8303ad72b3de3583bd39f2ff0e0cc53b3607f0b178fd69f\"" Feb 9 09:48:42.330814 systemd-networkd[1595]: cali865a43e41bb: Gained IPv6LL Feb 9 09:48:42.714864 systemd-networkd[1595]: calia2114aa1150: Gained IPv6LL Feb 9 09:48:43.632867 systemd[1]: run-containerd-runc-k8s.io-29132a5ada4c7efd8b06cf8e8587afd089835cf7869a279a10998ceb12a16738-runc.2u8cDJ.mount: Deactivated successfully. Feb 9 09:48:44.716537 env[1801]: time="2024-02-09T09:48:44.716482295Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:48:44.721556 env[1801]: time="2024-02-09T09:48:44.721506448Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:24494ef6c7de0e2dcf21ad9fb6c94801c53f120443e256a5e1b54eccd57058a9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:48:44.726320 env[1801]: time="2024-02-09T09:48:44.726266172Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:48:44.731420 env[1801]: time="2024-02-09T09:48:44.731365405Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:5ff0bdc8d0b2e9d7819703b18867f60f9153ed01da81e2bbfa22002abec9dc26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:48:44.734423 env[1801]: time="2024-02-09T09:48:44.733178219Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\" returns image reference \"sha256:24494ef6c7de0e2dcf21ad9fb6c94801c53f120443e256a5e1b54eccd57058a9\"" Feb 9 09:48:44.736075 env[1801]: time="2024-02-09T09:48:44.736022438Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\"" Feb 9 09:48:44.738688 env[1801]: time="2024-02-09T09:48:44.738626849Z" level=info msg="CreateContainer within 
sandbox \"779ab6f5ba985fe74e3d2c9379aadc4d36684ef127378b685b9221b4ee49d16e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 9 09:48:44.768151 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2370457347.mount: Deactivated successfully. Feb 9 09:48:44.773039 env[1801]: time="2024-02-09T09:48:44.772858905Z" level=info msg="CreateContainer within sandbox \"779ab6f5ba985fe74e3d2c9379aadc4d36684ef127378b685b9221b4ee49d16e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d0d021f75c0561ccfa8d4852bb476ae5033708ecf1e483c12555a7fb378080e8\"" Feb 9 09:48:44.774875 env[1801]: time="2024-02-09T09:48:44.774793141Z" level=info msg="StartContainer for \"d0d021f75c0561ccfa8d4852bb476ae5033708ecf1e483c12555a7fb378080e8\"" Feb 9 09:48:44.954104 env[1801]: time="2024-02-09T09:48:44.953656163Z" level=info msg="StartContainer for \"d0d021f75c0561ccfa8d4852bb476ae5033708ecf1e483c12555a7fb378080e8\" returns successfully" Feb 9 09:48:45.154806 env[1801]: time="2024-02-09T09:48:45.154747617Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:48:45.159426 env[1801]: time="2024-02-09T09:48:45.159377021Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:24494ef6c7de0e2dcf21ad9fb6c94801c53f120443e256a5e1b54eccd57058a9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:48:45.163888 env[1801]: time="2024-02-09T09:48:45.163828498Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:48:45.168485 env[1801]: time="2024-02-09T09:48:45.168435212Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:5ff0bdc8d0b2e9d7819703b18867f60f9153ed01da81e2bbfa22002abec9dc26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:48:45.170125 env[1801]: time="2024-02-09T09:48:45.170066206Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\" returns image reference \"sha256:24494ef6c7de0e2dcf21ad9fb6c94801c53f120443e256a5e1b54eccd57058a9\"" Feb 9 09:48:45.180624 env[1801]: time="2024-02-09T09:48:45.180532658Z" level=info msg="CreateContainer within sandbox \"96602e4ce611e738b8303ad72b3de3583bd39f2ff0e0cc53b3607f0b178fd69f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 9 09:48:45.208256 env[1801]: time="2024-02-09T09:48:45.208184782Z" level=info msg="CreateContainer within sandbox \"96602e4ce611e738b8303ad72b3de3583bd39f2ff0e0cc53b3607f0b178fd69f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"10e17c726ec2f1b2d7248174df2fb241eacb177362b6aca86949aaf295b12deb\"" Feb 9 09:48:45.214348 env[1801]: time="2024-02-09T09:48:45.214292296Z" level=info msg="StartContainer for \"10e17c726ec2f1b2d7248174df2fb241eacb177362b6aca86949aaf295b12deb\"" Feb 9 09:48:45.351122 env[1801]: time="2024-02-09T09:48:45.351037714Z" level=info msg="StartContainer for \"10e17c726ec2f1b2d7248174df2fb241eacb177362b6aca86949aaf295b12deb\" returns successfully" Feb 9 09:48:45.455309 kubelet[3093]: I0209 09:48:45.454592 3093 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-85f7786ffb-x6rjz" podStartSLOduration=-9.223372030400272e+09 pod.CreationTimestamp="2024-02-09 09:48:39 +0000 UTC" firstStartedPulling="2024-02-09 09:48:41.409087018 +0000 UTC m=+63.058222943" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:48:45.452056709 +0000 UTC m=+67.101192670" watchObservedRunningTime="2024-02-09 09:48:45.454503062 +0000 UTC m=+67.103638999" Feb 9 
09:48:45.673000 audit[5639]: NETFILTER_CFG table=filter:131 family=2 entries=8 op=nft_register_rule pid=5639 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:48:45.676545 kernel: kauditd_printk_skb: 8 callbacks suppressed Feb 9 09:48:45.676668 kernel: audit: type=1325 audit(1707472125.673:318): table=filter:131 family=2 entries=8 op=nft_register_rule pid=5639 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:48:45.673000 audit[5639]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=ffffcce428d0 a2=0 a3=ffffa8b076c0 items=0 ppid=3252 pid=5639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:45.694695 kernel: audit: type=1300 audit(1707472125.673:318): arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=ffffcce428d0 a2=0 a3=ffffa8b076c0 items=0 ppid=3252 pid=5639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:45.673000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:48:45.702318 kernel: audit: type=1327 audit(1707472125.673:318): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:48:45.702485 kernel: audit: type=1325 audit(1707472125.683:319): table=nat:132 family=2 entries=78 op=nft_register_rule pid=5639 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:48:45.683000 audit[5639]: NETFILTER_CFG table=nat:132 family=2 entries=78 op=nft_register_rule pid=5639 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:48:45.683000 audit[5639]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 
a1=ffffcce428d0 a2=0 a3=ffffa8b076c0 items=0 ppid=3252 pid=5639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:45.725318 kernel: audit: type=1300 audit(1707472125.683:319): arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=ffffcce428d0 a2=0 a3=ffffa8b076c0 items=0 ppid=3252 pid=5639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:45.683000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:48:45.741299 kernel: audit: type=1327 audit(1707472125.683:319): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:48:45.762503 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3580441621.mount: Deactivated successfully. 
Feb 9 09:48:45.908000 audit[5665]: NETFILTER_CFG table=filter:133 family=2 entries=8 op=nft_register_rule pid=5665 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:48:45.908000 audit[5665]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=ffffff5ea200 a2=0 a3=ffffa5fc56c0 items=0 ppid=3252 pid=5665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:45.926596 kernel: audit: type=1325 audit(1707472125.908:320): table=filter:133 family=2 entries=8 op=nft_register_rule pid=5665 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:48:45.926750 kernel: audit: type=1300 audit(1707472125.908:320): arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=ffffff5ea200 a2=0 a3=ffffa5fc56c0 items=0 ppid=3252 pid=5665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:45.908000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:48:45.933781 kernel: audit: type=1327 audit(1707472125.908:320): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:48:45.940078 kernel: audit: type=1325 audit(1707472125.916:321): table=nat:134 family=2 entries=78 op=nft_register_rule pid=5665 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:48:45.916000 audit[5665]: NETFILTER_CFG table=nat:134 family=2 entries=78 op=nft_register_rule pid=5665 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:48:45.916000 audit[5665]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=ffffff5ea200 a2=0 a3=ffffa5fc56c0 items=0 ppid=3252 pid=5665 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:45.916000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:49:01.621839 systemd[1]: run-containerd-runc-k8s.io-f32f686d7761da23a9ab57c3c42e55447a31ba7a09ecf86226656ab56f20a996-runc.t7oZdv.mount: Deactivated successfully. Feb 9 09:49:09.729900 systemd[1]: Started sshd@7-172.31.30.62:22-139.178.89.65:39072.service. Feb 9 09:49:09.741395 kernel: kauditd_printk_skb: 2 callbacks suppressed Feb 9 09:49:09.741463 kernel: audit: type=1130 audit(1707472149.728:322): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.30.62:22-139.178.89.65:39072 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:49:09.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.30.62:22-139.178.89.65:39072 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:49:09.934000 audit[5715]: USER_ACCT pid=5715 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:09.937214 sshd[5715]: Accepted publickey for core from 139.178.89.65 port 39072 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:49:09.946749 kernel: audit: type=1101 audit(1707472149.934:323): pid=5715 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:09.946000 audit[5715]: CRED_ACQ pid=5715 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:09.949736 sshd[5715]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:49:09.964408 kernel: audit: type=1103 audit(1707472149.946:324): pid=5715 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:09.964522 kernel: audit: type=1006 audit(1707472149.946:325): pid=5715 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Feb 9 09:49:09.946000 audit[5715]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd2dfedf0 a2=3 a3=1 items=0 ppid=1 pid=5715 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 
key=(null) Feb 9 09:49:09.975138 kernel: audit: type=1300 audit(1707472149.946:325): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd2dfedf0 a2=3 a3=1 items=0 ppid=1 pid=5715 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:49:09.946000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 09:49:09.979232 kernel: audit: type=1327 audit(1707472149.946:325): proctitle=737368643A20636F7265205B707269765D Feb 9 09:49:09.985729 systemd-logind[1793]: New session 8 of user core. Feb 9 09:49:09.986322 systemd[1]: Started session-8.scope. Feb 9 09:49:09.998000 audit[5715]: USER_START pid=5715 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:10.013752 kernel: audit: type=1105 audit(1707472149.998:326): pid=5715 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:10.013886 kernel: audit: type=1103 audit(1707472150.011:327): pid=5718 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:10.011000 audit[5718]: CRED_ACQ pid=5718 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:10.360924 sshd[5715]: 
pam_unix(sshd:session): session closed for user core Feb 9 09:49:10.361000 audit[5715]: USER_END pid=5715 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:10.367328 systemd[1]: sshd@7-172.31.30.62:22-139.178.89.65:39072.service: Deactivated successfully. Feb 9 09:49:10.369301 systemd[1]: session-8.scope: Deactivated successfully. Feb 9 09:49:10.363000 audit[5715]: CRED_DISP pid=5715 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:10.385187 kernel: audit: type=1106 audit(1707472150.361:328): pid=5715 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:10.385340 kernel: audit: type=1104 audit(1707472150.363:329): pid=5715 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:10.385917 systemd-logind[1793]: Session 8 logged out. Waiting for processes to exit. Feb 9 09:49:10.366000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.30.62:22-139.178.89.65:39072 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:49:10.388088 systemd-logind[1793]: Removed session 8. 
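The audit records in this log carry timestamps in `audit(epoch.millis:serial)` form rather than wall-clock time. As a reading aid (not part of the captured log), here is a minimal Python sketch that converts such a stamp to UTC; the sample value `1707472125.673:318` is copied from the records above, and the helper name `audit_stamp_to_utc` is my own:

```python
from datetime import datetime, timezone

def audit_stamp_to_utc(stamp: str) -> datetime:
    """Convert the 'epoch.millis:serial' portion of an audit(...) tag to UTC.

    The part before the colon is a Unix timestamp with millisecond
    precision; the part after the colon is the record serial number.
    """
    epoch_part = stamp.split(":")[0]  # drop the serial number
    return datetime.fromtimestamp(float(epoch_part), tz=timezone.utc)

# Sample value copied from the audit records above.
ts = audit_stamp_to_utc("1707472125.673:318")
print(ts.strftime("%Y-%m-%d %H:%M:%S"))  # 2024-02-09 09:48:45
```

This matches the syslog-style prefix (`Feb 9 09:48:45`) printed on the same record, which is a quick way to correlate `kauditd` lines with the surrounding journal output.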
Feb 9 09:49:10.593987 systemd[1]: run-containerd-runc-k8s.io-d0d021f75c0561ccfa8d4852bb476ae5033708ecf1e483c12555a7fb378080e8-runc.XZnaxL.mount: Deactivated successfully. Feb 9 09:49:10.657398 systemd[1]: run-containerd-runc-k8s.io-10e17c726ec2f1b2d7248174df2fb241eacb177362b6aca86949aaf295b12deb-runc.zf2b9m.mount: Deactivated successfully. Feb 9 09:49:10.717613 kubelet[3093]: I0209 09:49:10.715410 3093 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-85f7786ffb-r2rrt" podStartSLOduration=-9.223372005139425e+09 pod.CreationTimestamp="2024-02-09 09:48:39 +0000 UTC" firstStartedPulling="2024-02-09 09:48:41.432120396 +0000 UTC m=+63.081256321" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:48:45.485967671 +0000 UTC m=+67.135103632" watchObservedRunningTime="2024-02-09 09:49:10.7153512 +0000 UTC m=+92.364487125" Feb 9 09:49:10.863000 audit[5797]: NETFILTER_CFG table=filter:135 family=2 entries=7 op=nft_register_rule pid=5797 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:49:10.863000 audit[5797]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffc1330ee0 a2=0 a3=ffff8b9f16c0 items=0 ppid=3252 pid=5797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:49:10.863000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:49:10.867000 audit[5797]: NETFILTER_CFG table=nat:136 family=2 entries=85 op=nft_register_chain pid=5797 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:49:10.867000 audit[5797]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=28484 a0=3 a1=ffffc1330ee0 a2=0 a3=ffff8b9f16c0 items=0 ppid=3252 pid=5797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:49:10.867000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:49:10.954000 audit[5823]: NETFILTER_CFG table=filter:137 family=2 entries=6 op=nft_register_rule pid=5823 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:49:10.954000 audit[5823]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffc37bbda0 a2=0 a3=ffff82b396c0 items=0 ppid=3252 pid=5823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:49:10.954000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:49:10.962000 audit[5823]: NETFILTER_CFG table=nat:138 family=2 entries=92 op=nft_register_chain pid=5823 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:49:10.962000 audit[5823]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=30372 a0=3 a1=ffffc37bbda0 a2=0 a3=ffff82b396c0 items=0 ppid=3252 pid=5823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:49:10.962000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:49:13.596993 systemd[1]: run-containerd-runc-k8s.io-29132a5ada4c7efd8b06cf8e8587afd089835cf7869a279a10998ceb12a16738-runc.ogIWuU.mount: Deactivated successfully. 
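The `PROCTITLE` fields in the audit records above look opaque, but they are just the process command line hex-encoded, with NUL bytes separating the argv elements (the kernel audit subsystem hex-encodes strings it considers untrusted). As an aside, here is a minimal decoding sketch; the sample hex string is copied verbatim from the `iptables-restor` records above:

```python
def decode_proctitle(hex_value: str) -> list[str]:
    """Decode an audit PROCTITLE field: hex-encoded argv joined by NUL bytes."""
    return bytes.fromhex(hex_value).decode("ascii").split("\x00")

# Sample value copied verbatim from the iptables-restore records above.
argv = decode_proctitle(
    "69707461626C65732D726573746F7265002D770035002D5700313030303030"
    "002D2D6E6F666C757368002D2D636F756E74657273"
)
print(argv)  # ['iptables-restore', '-w', '5', '-W', '100000', '--noflush', '--counters']
```

Decoded this way, the repeated netfilter records above resolve to `iptables-restore -w 5 -W 100000 --noflush --counters` and `iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000`, i.e. kube-proxy and Calico refreshing their rule sets.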
Feb 9 09:49:14.150240 systemd[1]: run-containerd-runc-k8s.io-29132a5ada4c7efd8b06cf8e8587afd089835cf7869a279a10998ceb12a16738-runc.xS2mr4.mount: Deactivated successfully. Feb 9 09:49:15.388105 systemd[1]: Started sshd@8-172.31.30.62:22-139.178.89.65:39084.service. Feb 9 09:49:15.400137 kernel: kauditd_printk_skb: 13 callbacks suppressed Feb 9 09:49:15.400267 kernel: audit: type=1130 audit(1707472155.387:335): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.30.62:22-139.178.89.65:39084 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:49:15.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.30.62:22-139.178.89.65:39084 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:49:15.571000 audit[5862]: USER_ACCT pid=5862 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:15.573542 sshd[5862]: Accepted publickey for core from 139.178.89.65 port 39084 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:49:15.577462 sshd[5862]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:49:15.575000 audit[5862]: CRED_ACQ pid=5862 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:15.601395 kernel: audit: type=1101 audit(1707472155.571:336): pid=5862 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" 
exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:15.601560 kernel: audit: type=1103 audit(1707472155.575:337): pid=5862 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:15.601635 kernel: audit: type=1006 audit(1707472155.575:338): pid=5862 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Feb 9 09:49:15.593916 systemd[1]: Started session-9.scope. Feb 9 09:49:15.575000 audit[5862]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe827be70 a2=3 a3=1 items=0 ppid=1 pid=5862 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:49:15.608930 systemd-logind[1793]: New session 9 of user core. 
Feb 9 09:49:15.618337 kernel: audit: type=1300 audit(1707472155.575:338): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe827be70 a2=3 a3=1 items=0 ppid=1 pid=5862 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:49:15.575000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 09:49:15.628349 kernel: audit: type=1327 audit(1707472155.575:338): proctitle=737368643A20636F7265205B707269765D Feb 9 09:49:15.634000 audit[5862]: USER_START pid=5862 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:15.644000 audit[5865]: CRED_ACQ pid=5865 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:15.657071 kernel: audit: type=1105 audit(1707472155.634:339): pid=5862 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:15.657350 kernel: audit: type=1103 audit(1707472155.644:340): pid=5865 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:15.954109 sshd[5862]: pam_unix(sshd:session): session closed for user core Feb 9 09:49:15.954000 audit[5862]: USER_END pid=5862 uid=0 auid=500 ses=9 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:15.959602 systemd[1]: sshd@8-172.31.30.62:22-139.178.89.65:39084.service: Deactivated successfully. Feb 9 09:49:15.961385 systemd[1]: session-9.scope: Deactivated successfully. Feb 9 09:49:15.955000 audit[5862]: CRED_DISP pid=5862 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:15.968925 systemd-logind[1793]: Session 9 logged out. Waiting for processes to exit. Feb 9 09:49:15.978355 kernel: audit: type=1106 audit(1707472155.954:341): pid=5862 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:15.978535 kernel: audit: type=1104 audit(1707472155.955:342): pid=5862 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:15.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.30.62:22-139.178.89.65:39084 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:49:15.979558 systemd-logind[1793]: Removed session 9. 
Feb 9 09:49:20.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.30.62:22-139.178.89.65:34072 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:49:20.979193 systemd[1]: Started sshd@9-172.31.30.62:22-139.178.89.65:34072.service.
Feb 9 09:49:20.990352 kernel: kauditd_printk_skb: 1 callbacks suppressed
Feb 9 09:49:20.990462 kernel: audit: type=1130 audit(1707472160.977:344): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.30.62:22-139.178.89.65:34072 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:49:21.164000 audit[5877]: USER_ACCT pid=5877 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:21.166061 sshd[5877]: Accepted publickey for core from 139.178.89.65 port 34072 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M
Feb 9 09:49:21.177607 kernel: audit: type=1101 audit(1707472161.164:345): pid=5877 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:21.176000 audit[5877]: CRED_ACQ pid=5877 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:21.182920 sshd[5877]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:49:21.194144 kernel: audit: type=1103 audit(1707472161.176:346): pid=5877 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:21.194272 kernel: audit: type=1006 audit(1707472161.176:347): pid=5877 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1
Feb 9 09:49:21.176000 audit[5877]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff14a7da0 a2=3 a3=1 items=0 ppid=1 pid=5877 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:49:21.204864 kernel: audit: type=1300 audit(1707472161.176:347): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff14a7da0 a2=3 a3=1 items=0 ppid=1 pid=5877 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:49:21.176000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 9 09:49:21.205627 kernel: audit: type=1327 audit(1707472161.176:347): proctitle=737368643A20636F7265205B707269765D
Feb 9 09:49:21.216177 systemd[1]: Started session-10.scope.
Feb 9 09:49:21.216540 systemd-logind[1793]: New session 10 of user core.
Feb 9 09:49:21.230000 audit[5877]: USER_START pid=5877 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:21.230000 audit[5880]: CRED_ACQ pid=5880 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:21.252406 kernel: audit: type=1105 audit(1707472161.230:348): pid=5877 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:21.252751 kernel: audit: type=1103 audit(1707472161.230:349): pid=5880 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:21.472927 sshd[5877]: pam_unix(sshd:session): session closed for user core
Feb 9 09:49:21.473000 audit[5877]: USER_END pid=5877 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:21.478404 systemd[1]: sshd@9-172.31.30.62:22-139.178.89.65:34072.service: Deactivated successfully.
Feb 9 09:49:21.480467 systemd[1]: session-10.scope: Deactivated successfully.
Feb 9 09:49:21.474000 audit[5877]: CRED_DISP pid=5877 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:21.487861 systemd-logind[1793]: Session 10 logged out. Waiting for processes to exit.
Feb 9 09:49:21.497069 kernel: audit: type=1106 audit(1707472161.473:350): pid=5877 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:21.497232 kernel: audit: type=1104 audit(1707472161.474:351): pid=5877 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:21.477000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.30.62:22-139.178.89.65:34072 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:49:21.499092 systemd-logind[1793]: Removed session 10.
Feb 9 09:49:26.510831 kernel: kauditd_printk_skb: 1 callbacks suppressed
Feb 9 09:49:26.510956 kernel: audit: type=1130 audit(1707472166.498:353): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.30.62:22-139.178.89.65:34080 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:49:26.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.30.62:22-139.178.89.65:34080 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:49:26.499624 systemd[1]: Started sshd@10-172.31.30.62:22-139.178.89.65:34080.service.
Feb 9 09:49:26.683513 sshd[5894]: Accepted publickey for core from 139.178.89.65 port 34080 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M
Feb 9 09:49:26.681000 audit[5894]: USER_ACCT pid=5894 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:26.693000 audit[5894]: CRED_ACQ pid=5894 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:26.695966 sshd[5894]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:49:26.704540 kernel: audit: type=1101 audit(1707472166.681:354): pid=5894 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:26.704706 kernel: audit: type=1103 audit(1707472166.693:355): pid=5894 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:26.711108 kernel: audit: type=1006 audit(1707472166.693:356): pid=5894 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1
Feb 9 09:49:26.724862 kernel: audit: type=1300 audit(1707472166.693:356): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe3f5f890 a2=3 a3=1 items=0 ppid=1 pid=5894 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:49:26.693000 audit[5894]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe3f5f890 a2=3 a3=1 items=0 ppid=1 pid=5894 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:49:26.719698 systemd-logind[1793]: New session 11 of user core.
Feb 9 09:49:26.720889 systemd[1]: Started session-11.scope.
Feb 9 09:49:26.693000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 9 09:49:26.729419 kernel: audit: type=1327 audit(1707472166.693:356): proctitle=737368643A20636F7265205B707269765D
Feb 9 09:49:26.733000 audit[5894]: USER_START pid=5894 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:26.736000 audit[5897]: CRED_ACQ pid=5897 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:26.757466 kernel: audit: type=1105 audit(1707472166.733:357): pid=5894 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:26.757586 kernel: audit: type=1103 audit(1707472166.736:358): pid=5897 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:26.993531 sshd[5894]: pam_unix(sshd:session): session closed for user core
Feb 9 09:49:26.995000 audit[5894]: USER_END pid=5894 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:26.999514 systemd[1]: sshd@10-172.31.30.62:22-139.178.89.65:34080.service: Deactivated successfully.
Feb 9 09:49:27.001076 systemd[1]: session-11.scope: Deactivated successfully.
Feb 9 09:49:26.996000 audit[5894]: CRED_DISP pid=5894 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:27.009919 systemd-logind[1793]: Session 11 logged out. Waiting for processes to exit.
Feb 9 09:49:27.012158 systemd-logind[1793]: Removed session 11.
Feb 9 09:49:27.017677 kernel: audit: type=1106 audit(1707472166.995:359): pid=5894 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:27.017815 kernel: audit: type=1104 audit(1707472166.996:360): pid=5894 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:26.999000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.30.62:22-139.178.89.65:34080 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:49:27.025546 systemd[1]: Started sshd@11-172.31.30.62:22-139.178.89.65:34092.service.
Feb 9 09:49:27.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.30.62:22-139.178.89.65:34092 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:49:27.205000 audit[5908]: USER_ACCT pid=5908 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:27.205951 sshd[5908]: Accepted publickey for core from 139.178.89.65 port 34092 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M
Feb 9 09:49:27.207000 audit[5908]: CRED_ACQ pid=5908 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:27.207000 audit[5908]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe14eac60 a2=3 a3=1 items=0 ppid=1 pid=5908 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:49:27.207000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 9 09:49:27.208884 sshd[5908]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:49:27.218065 systemd[1]: Started session-12.scope.
Feb 9 09:49:27.218465 systemd-logind[1793]: New session 12 of user core.
Feb 9 09:49:27.229000 audit[5908]: USER_START pid=5908 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:27.232000 audit[5911]: CRED_ACQ pid=5911 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:29.657196 sshd[5908]: pam_unix(sshd:session): session closed for user core
Feb 9 09:49:29.658000 audit[5908]: USER_END pid=5908 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:29.659000 audit[5908]: CRED_DISP pid=5908 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:29.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.30.62:22-139.178.89.65:34092 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:49:29.663134 systemd[1]: sshd@11-172.31.30.62:22-139.178.89.65:34092.service: Deactivated successfully.
Feb 9 09:49:29.666476 systemd[1]: session-12.scope: Deactivated successfully.
Feb 9 09:49:29.667427 systemd-logind[1793]: Session 12 logged out. Waiting for processes to exit.
Feb 9 09:49:29.670684 systemd-logind[1793]: Removed session 12.
Feb 9 09:49:29.681428 systemd[1]: Started sshd@12-172.31.30.62:22-139.178.89.65:54758.service.
Feb 9 09:49:29.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.30.62:22-139.178.89.65:54758 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:49:29.868000 audit[5919]: USER_ACCT pid=5919 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:29.870994 sshd[5919]: Accepted publickey for core from 139.178.89.65 port 54758 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M
Feb 9 09:49:29.870000 audit[5919]: CRED_ACQ pid=5919 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:29.870000 audit[5919]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc8e37c60 a2=3 a3=1 items=0 ppid=1 pid=5919 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:49:29.870000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 9 09:49:29.872232 sshd[5919]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:49:29.881316 systemd[1]: Started session-13.scope.
Feb 9 09:49:29.883557 systemd-logind[1793]: New session 13 of user core.
Feb 9 09:49:29.894000 audit[5919]: USER_START pid=5919 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:29.897000 audit[5922]: CRED_ACQ pid=5922 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:30.150443 sshd[5919]: pam_unix(sshd:session): session closed for user core
Feb 9 09:49:30.151000 audit[5919]: USER_END pid=5919 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:30.152000 audit[5919]: CRED_DISP pid=5919 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:30.155621 systemd-logind[1793]: Session 13 logged out. Waiting for processes to exit.
Feb 9 09:49:30.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.30.62:22-139.178.89.65:54758 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:49:30.156606 systemd[1]: sshd@12-172.31.30.62:22-139.178.89.65:54758.service: Deactivated successfully.
Feb 9 09:49:30.158994 systemd[1]: session-13.scope: Deactivated successfully.
Feb 9 09:49:30.160530 systemd-logind[1793]: Removed session 13.
Feb 9 09:49:31.593022 systemd[1]: run-containerd-runc-k8s.io-f32f686d7761da23a9ab57c3c42e55447a31ba7a09ecf86226656ab56f20a996-runc.438npf.mount: Deactivated successfully.
Feb 9 09:49:35.177350 systemd[1]: Started sshd@13-172.31.30.62:22-139.178.89.65:54766.service.
Feb 9 09:49:35.189446 kernel: kauditd_printk_skb: 23 callbacks suppressed
Feb 9 09:49:35.189604 kernel: audit: type=1130 audit(1707472175.177:380): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.30.62:22-139.178.89.65:54766 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:49:35.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.30.62:22-139.178.89.65:54766 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:49:35.343000 audit[5954]: USER_ACCT pid=5954 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:35.344421 sshd[5954]: Accepted publickey for core from 139.178.89.65 port 54766 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M
Feb 9 09:49:35.355000 audit[5954]: CRED_ACQ pid=5954 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:35.357345 sshd[5954]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:49:35.365923 kernel: audit: type=1101 audit(1707472175.343:381): pid=5954 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:35.366042 kernel: audit: type=1103 audit(1707472175.355:382): pid=5954 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:35.372167 kernel: audit: type=1006 audit(1707472175.356:383): pid=5954 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1
Feb 9 09:49:35.372607 kernel: audit: type=1300 audit(1707472175.356:383): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc7b3c900 a2=3 a3=1 items=0 ppid=1 pid=5954 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:49:35.356000 audit[5954]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc7b3c900 a2=3 a3=1 items=0 ppid=1 pid=5954 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:49:35.380419 systemd[1]: Started session-14.scope.
Feb 9 09:49:35.356000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 9 09:49:35.382918 systemd-logind[1793]: New session 14 of user core.
Feb 9 09:49:35.386887 kernel: audit: type=1327 audit(1707472175.356:383): proctitle=737368643A20636F7265205B707269765D
Feb 9 09:49:35.397000 audit[5954]: USER_START pid=5954 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:35.409000 audit[5957]: CRED_ACQ pid=5957 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:35.419397 kernel: audit: type=1105 audit(1707472175.397:384): pid=5954 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:35.419496 kernel: audit: type=1103 audit(1707472175.409:385): pid=5957 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:35.654873 sshd[5954]: pam_unix(sshd:session): session closed for user core
Feb 9 09:49:35.656000 audit[5954]: USER_END pid=5954 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:35.660118 systemd-logind[1793]: Session 14 logged out. Waiting for processes to exit.
Feb 9 09:49:35.663110 systemd[1]: sshd@13-172.31.30.62:22-139.178.89.65:54766.service: Deactivated successfully.
Feb 9 09:49:35.664611 systemd[1]: session-14.scope: Deactivated successfully.
Feb 9 09:49:35.668114 systemd-logind[1793]: Removed session 14.
Feb 9 09:49:35.656000 audit[5954]: CRED_DISP pid=5954 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:35.677815 kernel: audit: type=1106 audit(1707472175.656:386): pid=5954 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:35.677951 kernel: audit: type=1104 audit(1707472175.656:387): pid=5954 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:35.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.30.62:22-139.178.89.65:54766 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:49:40.611284 systemd[1]: run-containerd-runc-k8s.io-d0d021f75c0561ccfa8d4852bb476ae5033708ecf1e483c12555a7fb378080e8-runc.0dUUzf.mount: Deactivated successfully.
Feb 9 09:49:40.667370 systemd[1]: run-containerd-runc-k8s.io-10e17c726ec2f1b2d7248174df2fb241eacb177362b6aca86949aaf295b12deb-runc.RKyrRv.mount: Deactivated successfully.
Feb 9 09:49:40.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.30.62:22-139.178.89.65:42506 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:49:40.684817 systemd[1]: Started sshd@14-172.31.30.62:22-139.178.89.65:42506.service.
Feb 9 09:49:40.687452 kernel: kauditd_printk_skb: 1 callbacks suppressed
Feb 9 09:49:40.687529 kernel: audit: type=1130 audit(1707472180.684:389): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.30.62:22-139.178.89.65:42506 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:49:40.871000 audit[6009]: USER_ACCT pid=6009 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:40.875135 sshd[6009]: Accepted publickey for core from 139.178.89.65 port 42506 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M
Feb 9 09:49:40.876878 sshd[6009]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:49:40.871000 audit[6009]: CRED_ACQ pid=6009 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:40.892837 kernel: audit: type=1101 audit(1707472180.871:390): pid=6009 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:40.892911 kernel: audit: type=1103 audit(1707472180.871:391): pid=6009 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:40.899009 kernel: audit: type=1006 audit(1707472180.871:392): pid=6009 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1
Feb 9 09:49:40.871000 audit[6009]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff04ec300 a2=3 a3=1 items=0 ppid=1 pid=6009 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:49:40.907984 systemd-logind[1793]: New session 15 of user core.
Feb 9 09:49:40.909129 systemd[1]: Started session-15.scope.
Feb 9 09:49:40.871000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 9 09:49:40.911712 kernel: audit: type=1300 audit(1707472180.871:392): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff04ec300 a2=3 a3=1 items=0 ppid=1 pid=6009 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:49:40.915750 kernel: audit: type=1327 audit(1707472180.871:392): proctitle=737368643A20636F7265205B707269765D
Feb 9 09:49:40.922000 audit[6009]: USER_START pid=6009 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:40.922000 audit[6016]: CRED_ACQ pid=6016 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:40.944044 kernel: audit: type=1105 audit(1707472180.922:393): pid=6009 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:40.944147 kernel: audit: type=1103 audit(1707472180.922:394): pid=6016 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:41.174917 sshd[6009]: pam_unix(sshd:session): session closed for user core
Feb 9 09:49:41.177000 audit[6009]: USER_END pid=6009 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:41.180499 systemd[1]: sshd@14-172.31.30.62:22-139.178.89.65:42506.service: Deactivated successfully.
Feb 9 09:49:41.182015 systemd[1]: session-15.scope: Deactivated successfully.
Feb 9 09:49:41.191342 systemd-logind[1793]: Session 15 logged out. Waiting for processes to exit.
Feb 9 09:49:41.177000 audit[6009]: CRED_DISP pid=6009 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:41.196255 kernel: audit: type=1106 audit(1707472181.177:395): pid=6009 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:41.194713 systemd-logind[1793]: Removed session 15.
Feb 9 09:49:41.177000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.30.62:22-139.178.89.65:42506 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:49:41.208712 kernel: audit: type=1104 audit(1707472181.177:396): pid=6009 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:43.590276 systemd[1]: run-containerd-runc-k8s.io-29132a5ada4c7efd8b06cf8e8587afd089835cf7869a279a10998ceb12a16738-runc.uzgYrJ.mount: Deactivated successfully.
Feb 9 09:49:46.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.30.62:22-139.178.89.65:42514 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:49:46.200091 systemd[1]: Started sshd@15-172.31.30.62:22-139.178.89.65:42514.service.
Feb 9 09:49:46.202731 kernel: kauditd_printk_skb: 1 callbacks suppressed
Feb 9 09:49:46.202800 kernel: audit: type=1130 audit(1707472186.198:398): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.30.62:22-139.178.89.65:42514 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:49:46.372000 audit[6047]: USER_ACCT pid=6047 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:46.374685 sshd[6047]: Accepted publickey for core from 139.178.89.65 port 42514 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M
Feb 9 09:49:46.385683 kernel: audit: type=1101 audit(1707472186.372:399): pid=6047 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:46.385000 audit[6047]: CRED_ACQ pid=6047 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:46.387753 sshd[6047]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:49:46.403865 kernel: audit: type=1103 audit(1707472186.385:400): pid=6047 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:46.404006 kernel: audit: type=1006 audit(1707472186.385:401): pid=6047 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1
Feb 9 09:49:46.385000 audit[6047]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd4dbecd0 a2=3 a3=1 items=0 ppid=1 pid=6047 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:49:46.415261 kernel: audit: type=1300 audit(1707472186.385:401): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd4dbecd0 a2=3 a3=1 items=0 ppid=1 pid=6047 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:49:46.385000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 9 09:49:46.419790 kernel: audit: type=1327 audit(1707472186.385:401): proctitle=737368643A20636F7265205B707269765D
Feb 9 09:49:46.423823 systemd-logind[1793]: New session 16 of user core.
Feb 9 09:49:46.425097 systemd[1]: Started session-16.scope.
Feb 9 09:49:46.435000 audit[6047]: USER_START pid=6047 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:46.450006 kernel: audit: type=1105 audit(1707472186.435:402): pid=6047 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:46.448000 audit[6050]: CRED_ACQ pid=6050 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:46.460622 kernel: audit: type=1103 audit(1707472186.448:403): pid=6050 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:46.688862 sshd[6047]: pam_unix(sshd:session): session closed for user core
Feb 9 09:49:46.689000 audit[6047]: USER_END pid=6047 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:49:46.703780 systemd[1]: sshd@15-172.31.30.62:22-139.178.89.65:42514.service: Deactivated successfully.
Feb 9 09:49:46.689000 audit[6047]: CRED_DISP pid=6047 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:46.714741 kernel: audit: type=1106 audit(1707472186.689:404): pid=6047 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:46.714832 kernel: audit: type=1104 audit(1707472186.689:405): pid=6047 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:46.705679 systemd[1]: session-16.scope: Deactivated successfully. Feb 9 09:49:46.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.30.62:22-139.178.89.65:42514 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:49:46.716100 systemd-logind[1793]: Session 16 logged out. Waiting for processes to exit. Feb 9 09:49:46.718636 systemd-logind[1793]: Removed session 16. Feb 9 09:49:51.728465 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 09:49:51.728681 kernel: audit: type=1130 audit(1707472191.716:407): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.30.62:22-139.178.89.65:55524 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:49:51.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.30.62:22-139.178.89.65:55524 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:49:51.716745 systemd[1]: Started sshd@16-172.31.30.62:22-139.178.89.65:55524.service. Feb 9 09:49:51.888000 audit[6067]: USER_ACCT pid=6067 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:51.889627 sshd[6067]: Accepted publickey for core from 139.178.89.65 port 55524 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:49:51.903642 kernel: audit: type=1101 audit(1707472191.888:408): pid=6067 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:51.904000 audit[6067]: CRED_ACQ pid=6067 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:51.908321 sshd[6067]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:49:51.916064 kernel: audit: type=1103 audit(1707472191.904:409): pid=6067 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:51.916176 kernel: audit: type=1006 audit(1707472191.904:410): pid=6067 uid=0 subj=system_u:system_r:kernel_t:s0 
old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Feb 9 09:49:51.904000 audit[6067]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc888b4e0 a2=3 a3=1 items=0 ppid=1 pid=6067 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:49:51.932276 kernel: audit: type=1300 audit(1707472191.904:410): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc888b4e0 a2=3 a3=1 items=0 ppid=1 pid=6067 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:49:51.904000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 09:49:51.936212 kernel: audit: type=1327 audit(1707472191.904:410): proctitle=737368643A20636F7265205B707269765D Feb 9 09:49:51.940667 systemd-logind[1793]: New session 17 of user core. Feb 9 09:49:51.943898 systemd[1]: Started session-17.scope. 
Feb 9 09:49:51.957000 audit[6067]: USER_START pid=6067 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:51.971405 kernel: audit: type=1105 audit(1707472191.957:411): pid=6067 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:51.971554 kernel: audit: type=1103 audit(1707472191.970:412): pid=6070 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:51.970000 audit[6070]: CRED_ACQ pid=6070 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:52.207830 sshd[6067]: pam_unix(sshd:session): session closed for user core Feb 9 09:49:52.209000 audit[6067]: USER_END pid=6067 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:52.210000 audit[6067]: CRED_DISP pid=6067 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:52.222833 
systemd[1]: sshd@16-172.31.30.62:22-139.178.89.65:55524.service: Deactivated successfully. Feb 9 09:49:52.224214 systemd[1]: session-17.scope: Deactivated successfully. Feb 9 09:49:52.231251 kernel: audit: type=1106 audit(1707472192.209:413): pid=6067 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:52.231363 kernel: audit: type=1104 audit(1707472192.210:414): pid=6067 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:52.222000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.30.62:22-139.178.89.65:55524 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:49:52.232690 systemd-logind[1793]: Session 17 logged out. Waiting for processes to exit. Feb 9 09:49:52.240741 systemd[1]: Started sshd@17-172.31.30.62:22-139.178.89.65:55538.service. Feb 9 09:49:52.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.30.62:22-139.178.89.65:55538 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:49:52.242783 systemd-logind[1793]: Removed session 17. 
Feb 9 09:49:52.433000 audit[6080]: USER_ACCT pid=6080 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:52.434308 sshd[6080]: Accepted publickey for core from 139.178.89.65 port 55538 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:49:52.435000 audit[6080]: CRED_ACQ pid=6080 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:52.435000 audit[6080]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffee639380 a2=3 a3=1 items=0 ppid=1 pid=6080 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:49:52.435000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 09:49:52.437462 sshd[6080]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:49:52.449025 systemd[1]: Started session-18.scope. Feb 9 09:49:52.449468 systemd-logind[1793]: New session 18 of user core. 
Feb 9 09:49:52.459000 audit[6080]: USER_START pid=6080 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:52.464000 audit[6083]: CRED_ACQ pid=6083 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:52.960886 sshd[6080]: pam_unix(sshd:session): session closed for user core Feb 9 09:49:52.962000 audit[6080]: USER_END pid=6080 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:52.962000 audit[6080]: CRED_DISP pid=6080 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:52.967335 systemd[1]: sshd@17-172.31.30.62:22-139.178.89.65:55538.service: Deactivated successfully. Feb 9 09:49:52.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.30.62:22-139.178.89.65:55538 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:49:52.969071 systemd[1]: session-18.scope: Deactivated successfully. Feb 9 09:49:52.970683 systemd-logind[1793]: Session 18 logged out. Waiting for processes to exit. Feb 9 09:49:52.972880 systemd-logind[1793]: Removed session 18. 
Feb 9 09:49:52.986423 systemd[1]: Started sshd@18-172.31.30.62:22-139.178.89.65:55554.service. Feb 9 09:49:52.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.30.62:22-139.178.89.65:55554 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:49:53.161000 audit[6091]: USER_ACCT pid=6091 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:53.162891 sshd[6091]: Accepted publickey for core from 139.178.89.65 port 55554 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:49:53.164000 audit[6091]: CRED_ACQ pid=6091 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:53.164000 audit[6091]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff9cf0a30 a2=3 a3=1 items=0 ppid=1 pid=6091 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:49:53.164000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 09:49:53.166341 sshd[6091]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:49:53.176500 systemd[1]: Started session-19.scope. Feb 9 09:49:53.177548 systemd-logind[1793]: New session 19 of user core. 
Feb 9 09:49:53.189000 audit[6091]: USER_START pid=6091 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:53.194000 audit[6094]: CRED_ACQ pid=6094 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:54.977723 sshd[6091]: pam_unix(sshd:session): session closed for user core Feb 9 09:49:54.979000 audit[6091]: USER_END pid=6091 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:54.980000 audit[6091]: CRED_DISP pid=6091 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:54.984639 systemd[1]: sshd@18-172.31.30.62:22-139.178.89.65:55554.service: Deactivated successfully. Feb 9 09:49:54.984000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.30.62:22-139.178.89.65:55554 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:49:54.985914 systemd-logind[1793]: Session 19 logged out. Waiting for processes to exit. Feb 9 09:49:54.987202 systemd[1]: session-19.scope: Deactivated successfully. Feb 9 09:49:54.988903 systemd-logind[1793]: Removed session 19. 
Feb 9 09:49:55.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.30.62:22-139.178.89.65:55570 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:49:55.013846 systemd[1]: Started sshd@19-172.31.30.62:22-139.178.89.65:55570.service. Feb 9 09:49:55.206000 audit[6137]: NETFILTER_CFG table=filter:139 family=2 entries=18 op=nft_register_rule pid=6137 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:49:55.206000 audit[6137]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=10364 a0=3 a1=ffffe4766020 a2=0 a3=ffffa26046c0 items=0 ppid=3252 pid=6137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:49:55.206000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:49:55.210000 audit[6137]: NETFILTER_CFG table=nat:140 family=2 entries=94 op=nft_register_rule pid=6137 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:49:55.210000 audit[6137]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=30372 a0=3 a1=ffffe4766020 a2=0 a3=ffffa26046c0 items=0 ppid=3252 pid=6137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:49:55.210000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:49:55.229000 audit[6121]: USER_ACCT pid=6121 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 
addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:55.230757 sshd[6121]: Accepted publickey for core from 139.178.89.65 port 55570 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:49:55.232000 audit[6121]: CRED_ACQ pid=6121 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:55.232000 audit[6121]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe3de5f70 a2=3 a3=1 items=0 ppid=1 pid=6121 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:49:55.232000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 09:49:55.234117 sshd[6121]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:49:55.243790 systemd[1]: Started session-20.scope. Feb 9 09:49:55.244200 systemd-logind[1793]: New session 20 of user core. 
Feb 9 09:49:55.259000 audit[6121]: USER_START pid=6121 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:55.261000 audit[6147]: CRED_ACQ pid=6147 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:55.319000 audit[6165]: NETFILTER_CFG table=filter:141 family=2 entries=30 op=nft_register_rule pid=6165 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:49:55.319000 audit[6165]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=10364 a0=3 a1=ffffd34a8b30 a2=0 a3=ffff9410f6c0 items=0 ppid=3252 pid=6165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:49:55.319000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:49:55.324000 audit[6165]: NETFILTER_CFG table=nat:142 family=2 entries=94 op=nft_register_rule pid=6165 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:49:55.324000 audit[6165]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=30372 a0=3 a1=ffffd34a8b30 a2=0 a3=ffff9410f6c0 items=0 ppid=3252 pid=6165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:49:55.324000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 
09:49:55.847799 sshd[6121]: pam_unix(sshd:session): session closed for user core Feb 9 09:49:55.849000 audit[6121]: USER_END pid=6121 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:55.849000 audit[6121]: CRED_DISP pid=6121 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:55.853000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.30.62:22-139.178.89.65:55570 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:49:55.853443 systemd[1]: sshd@19-172.31.30.62:22-139.178.89.65:55570.service: Deactivated successfully. Feb 9 09:49:55.856408 systemd[1]: session-20.scope: Deactivated successfully. Feb 9 09:49:55.857457 systemd-logind[1793]: Session 20 logged out. Waiting for processes to exit. Feb 9 09:49:55.860875 systemd-logind[1793]: Removed session 20. Feb 9 09:49:55.871976 systemd[1]: Started sshd@20-172.31.30.62:22-139.178.89.65:55586.service. Feb 9 09:49:55.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.30.62:22-139.178.89.65:55586 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:49:56.043000 audit[6173]: USER_ACCT pid=6173 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:56.044625 sshd[6173]: Accepted publickey for core from 139.178.89.65 port 55586 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:49:56.045000 audit[6173]: CRED_ACQ pid=6173 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:56.046000 audit[6173]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffff7e9130 a2=3 a3=1 items=0 ppid=1 pid=6173 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:49:56.046000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 09:49:56.047440 sshd[6173]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:49:56.054261 systemd-logind[1793]: New session 21 of user core. Feb 9 09:49:56.056218 systemd[1]: Started session-21.scope. 
Feb 9 09:49:56.068000 audit[6173]: USER_START pid=6173 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:56.072000 audit[6176]: CRED_ACQ pid=6176 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:56.306912 sshd[6173]: pam_unix(sshd:session): session closed for user core Feb 9 09:49:56.308000 audit[6173]: USER_END pid=6173 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:56.308000 audit[6173]: CRED_DISP pid=6173 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 09:49:56.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.30.62:22-139.178.89.65:55586 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:49:56.312971 systemd[1]: sshd@20-172.31.30.62:22-139.178.89.65:55586.service: Deactivated successfully. Feb 9 09:49:56.318111 systemd[1]: session-21.scope: Deactivated successfully. Feb 9 09:49:56.318536 systemd-logind[1793]: Session 21 logged out. Waiting for processes to exit. Feb 9 09:49:56.324548 systemd-logind[1793]: Removed session 21. 
Feb 9 09:50:01.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.30.62:22-139.178.89.65:42776 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:50:01.335118 systemd[1]: Started sshd@21-172.31.30.62:22-139.178.89.65:42776.service.
Feb 9 09:50:01.340606 kernel: kauditd_printk_skb: 57 callbacks suppressed
Feb 9 09:50:01.340773 kernel: audit: type=1130 audit(1707472201.334:456): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.30.62:22-139.178.89.65:42776 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:50:01.504000 audit[6200]: USER_ACCT pid=6200 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:01.508082 sshd[6200]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:50:01.516305 sshd[6200]: Accepted publickey for core from 139.178.89.65 port 42776 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M
Feb 9 09:50:01.506000 audit[6200]: CRED_ACQ pid=6200 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:01.527423 kernel: audit: type=1101 audit(1707472201.504:457): pid=6200 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:01.527590 kernel: audit: type=1103 audit(1707472201.506:458): pid=6200 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:01.517635 systemd[1]: Started session-22.scope.
Feb 9 09:50:01.519539 systemd-logind[1793]: New session 22 of user core.
Feb 9 09:50:01.534408 kernel: audit: type=1006 audit(1707472201.506:459): pid=6200 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1
Feb 9 09:50:01.506000 audit[6200]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff7602ca0 a2=3 a3=1 items=0 ppid=1 pid=6200 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:50:01.545795 kernel: audit: type=1300 audit(1707472201.506:459): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff7602ca0 a2=3 a3=1 items=0 ppid=1 pid=6200 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:50:01.506000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 9 09:50:01.549644 kernel: audit: type=1327 audit(1707472201.506:459): proctitle=737368643A20636F7265205B707269765D
Feb 9 09:50:01.549000 audit[6200]: USER_START pid=6200 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:01.561496 kernel: audit: type=1105 audit(1707472201.549:460): pid=6200 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:01.563000 audit[6203]: CRED_ACQ pid=6203 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:01.574626 kernel: audit: type=1103 audit(1707472201.563:461): pid=6203 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:01.806220 sshd[6200]: pam_unix(sshd:session): session closed for user core
Feb 9 09:50:01.807000 audit[6200]: USER_END pid=6200 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:01.822108 systemd[1]: sshd@21-172.31.30.62:22-139.178.89.65:42776.service: Deactivated successfully.
Feb 9 09:50:01.807000 audit[6200]: CRED_DISP pid=6200 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:01.833231 kernel: audit: type=1106 audit(1707472201.807:462): pid=6200 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:01.833381 kernel: audit: type=1104 audit(1707472201.807:463): pid=6200 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:01.833681 systemd[1]: session-22.scope: Deactivated successfully.
Feb 9 09:50:01.834552 systemd-logind[1793]: Session 22 logged out. Waiting for processes to exit.
Feb 9 09:50:01.821000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.30.62:22-139.178.89.65:42776 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:50:01.837384 systemd-logind[1793]: Removed session 22.
Feb 9 09:50:03.996000 audit[6259]: NETFILTER_CFG table=filter:143 family=2 entries=18 op=nft_register_rule pid=6259 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 9 09:50:03.996000 audit[6259]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffc47befe0 a2=0 a3=ffff8f95e6c0 items=0 ppid=3252 pid=6259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:50:03.996000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 9 09:50:04.004000 audit[6259]: NETFILTER_CFG table=nat:144 family=2 entries=178 op=nft_register_chain pid=6259 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 9 09:50:04.004000 audit[6259]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=72324 a0=3 a1=ffffc47befe0 a2=0 a3=ffff8f95e6c0 items=0 ppid=3252 pid=6259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:50:04.004000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 9 09:50:06.833613 systemd[1]: Started sshd@22-172.31.30.62:22-139.178.89.65:42780.service.
Feb 9 09:50:06.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.30.62:22-139.178.89.65:42780 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:50:06.837598 kernel: kauditd_printk_skb: 7 callbacks suppressed
Feb 9 09:50:06.837737 kernel: audit: type=1130 audit(1707472206.832:467): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.30.62:22-139.178.89.65:42780 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:50:07.007000 audit[6261]: USER_ACCT pid=6261 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:07.012086 sshd[6261]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:50:07.013269 sshd[6261]: Accepted publickey for core from 139.178.89.65 port 42780 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M
Feb 9 09:50:07.009000 audit[6261]: CRED_ACQ pid=6261 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:07.029605 kernel: audit: type=1101 audit(1707472207.007:468): pid=6261 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:07.029774 kernel: audit: type=1103 audit(1707472207.009:469): pid=6261 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:07.035901 kernel: audit: type=1006 audit(1707472207.009:470): pid=6261 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1
Feb 9 09:50:07.009000 audit[6261]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd1016670 a2=3 a3=1 items=0 ppid=1 pid=6261 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:50:07.046439 kernel: audit: type=1300 audit(1707472207.009:470): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd1016670 a2=3 a3=1 items=0 ppid=1 pid=6261 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:50:07.009000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 9 09:50:07.051125 kernel: audit: type=1327 audit(1707472207.009:470): proctitle=737368643A20636F7265205B707269765D
Feb 9 09:50:07.052082 systemd-logind[1793]: New session 23 of user core.
Feb 9 09:50:07.055035 systemd[1]: Started session-23.scope.
Feb 9 09:50:07.063000 audit[6261]: USER_START pid=6261 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:07.079599 kernel: audit: type=1105 audit(1707472207.063:471): pid=6261 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:07.080000 audit[6264]: CRED_ACQ pid=6264 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:07.091620 kernel: audit: type=1103 audit(1707472207.080:472): pid=6264 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:07.314133 sshd[6261]: pam_unix(sshd:session): session closed for user core
Feb 9 09:50:07.314000 audit[6261]: USER_END pid=6261 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:07.320108 systemd[1]: sshd@22-172.31.30.62:22-139.178.89.65:42780.service: Deactivated successfully.
Feb 9 09:50:07.321806 systemd[1]: session-23.scope: Deactivated successfully.
Feb 9 09:50:07.316000 audit[6261]: CRED_DISP pid=6261 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:07.329817 systemd-logind[1793]: Session 23 logged out. Waiting for processes to exit.
Feb 9 09:50:07.330629 kernel: audit: type=1106 audit(1707472207.314:473): pid=6261 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:07.319000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.30.62:22-139.178.89.65:42780 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:50:07.341725 kernel: audit: type=1104 audit(1707472207.316:474): pid=6261 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:07.341524 systemd-logind[1793]: Removed session 23.
Feb 9 09:50:10.609311 systemd[1]: run-containerd-runc-k8s.io-d0d021f75c0561ccfa8d4852bb476ae5033708ecf1e483c12555a7fb378080e8-runc.jgmxXi.mount: Deactivated successfully.
Feb 9 09:50:10.672644 systemd[1]: run-containerd-runc-k8s.io-10e17c726ec2f1b2d7248174df2fb241eacb177362b6aca86949aaf295b12deb-runc.x7dsuz.mount: Deactivated successfully.
Feb 9 09:50:12.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.30.62:22-139.178.89.65:54192 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:50:12.339495 systemd[1]: Started sshd@23-172.31.30.62:22-139.178.89.65:54192.service.
Feb 9 09:50:12.343733 kernel: kauditd_printk_skb: 1 callbacks suppressed
Feb 9 09:50:12.343876 kernel: audit: type=1130 audit(1707472212.339:476): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.30.62:22-139.178.89.65:54192 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:50:12.511000 audit[6313]: USER_ACCT pid=6313 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:12.512801 sshd[6313]: Accepted publickey for core from 139.178.89.65 port 54192 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M
Feb 9 09:50:12.516014 sshd[6313]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:50:12.514000 audit[6313]: CRED_ACQ pid=6313 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:12.532603 kernel: audit: type=1101 audit(1707472212.511:477): pid=6313 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:12.532741 kernel: audit: type=1103 audit(1707472212.514:478): pid=6313 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:12.539430 kernel: audit: type=1006 audit(1707472212.514:479): pid=6313 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1
Feb 9 09:50:12.514000 audit[6313]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff30872c0 a2=3 a3=1 items=0 ppid=1 pid=6313 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:50:12.550430 kernel: audit: type=1300 audit(1707472212.514:479): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff30872c0 a2=3 a3=1 items=0 ppid=1 pid=6313 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:50:12.554094 systemd-logind[1793]: New session 24 of user core.
Feb 9 09:50:12.556223 systemd[1]: Started session-24.scope.
Feb 9 09:50:12.514000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 9 09:50:12.560383 kernel: audit: type=1327 audit(1707472212.514:479): proctitle=737368643A20636F7265205B707269765D
Feb 9 09:50:12.568000 audit[6313]: USER_START pid=6313 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:12.569000 audit[6316]: CRED_ACQ pid=6316 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:12.592143 kernel: audit: type=1105 audit(1707472212.568:480): pid=6313 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:12.592266 kernel: audit: type=1103 audit(1707472212.569:481): pid=6316 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:12.798652 sshd[6313]: pam_unix(sshd:session): session closed for user core
Feb 9 09:50:12.800000 audit[6313]: USER_END pid=6313 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:12.813240 systemd[1]: sshd@23-172.31.30.62:22-139.178.89.65:54192.service: Deactivated successfully.
Feb 9 09:50:12.814966 systemd[1]: session-24.scope: Deactivated successfully.
Feb 9 09:50:12.806000 audit[6313]: CRED_DISP pid=6313 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:12.827098 kernel: audit: type=1106 audit(1707472212.800:482): pid=6313 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:12.827254 kernel: audit: type=1104 audit(1707472212.806:483): pid=6313 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:12.817622 systemd-logind[1793]: Session 24 logged out. Waiting for processes to exit.
Feb 9 09:50:12.813000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.30.62:22-139.178.89.65:54192 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:50:12.828892 systemd-logind[1793]: Removed session 24.
Feb 9 09:50:13.621426 systemd[1]: run-containerd-runc-k8s.io-29132a5ada4c7efd8b06cf8e8587afd089835cf7869a279a10998ceb12a16738-runc.ZTbogt.mount: Deactivated successfully.
Feb 9 09:50:17.838446 kernel: kauditd_printk_skb: 1 callbacks suppressed
Feb 9 09:50:17.838714 kernel: audit: type=1130 audit(1707472217.826:485): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.30.62:22-139.178.89.65:54200 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:50:17.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.30.62:22-139.178.89.65:54200 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:50:17.827113 systemd[1]: Started sshd@24-172.31.30.62:22-139.178.89.65:54200.service.
Feb 9 09:50:17.997000 audit[6366]: USER_ACCT pid=6366 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:18.000764 sshd[6366]: Accepted publickey for core from 139.178.89.65 port 54200 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M
Feb 9 09:50:18.008620 kernel: audit: type=1101 audit(1707472217.997:486): pid=6366 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:18.010000 audit[6366]: CRED_ACQ pid=6366 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:18.011936 sshd[6366]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:50:18.029191 kernel: audit: type=1103 audit(1707472218.010:487): pid=6366 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:18.029343 kernel: audit: type=1006 audit(1707472218.010:488): pid=6366 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1
Feb 9 09:50:18.010000 audit[6366]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdd185c10 a2=3 a3=1 items=0 ppid=1 pid=6366 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:50:18.040340 kernel: audit: type=1300 audit(1707472218.010:488): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdd185c10 a2=3 a3=1 items=0 ppid=1 pid=6366 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:50:18.010000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 9 09:50:18.046719 kernel: audit: type=1327 audit(1707472218.010:488): proctitle=737368643A20636F7265205B707269765D
Feb 9 09:50:18.052712 systemd-logind[1793]: New session 25 of user core.
Feb 9 09:50:18.053202 systemd[1]: Started session-25.scope.
Feb 9 09:50:18.063000 audit[6366]: USER_START pid=6366 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:18.076000 audit[6369]: CRED_ACQ pid=6369 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:18.086616 kernel: audit: type=1105 audit(1707472218.063:489): pid=6366 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:18.086732 kernel: audit: type=1103 audit(1707472218.076:490): pid=6369 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:18.310619 sshd[6366]: pam_unix(sshd:session): session closed for user core
Feb 9 09:50:18.312000 audit[6366]: USER_END pid=6366 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:18.316975 systemd[1]: sshd@24-172.31.30.62:22-139.178.89.65:54200.service: Deactivated successfully.
Feb 9 09:50:18.318482 systemd[1]: session-25.scope: Deactivated successfully.
Feb 9 09:50:18.312000 audit[6366]: CRED_DISP pid=6366 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:18.334031 kernel: audit: type=1106 audit(1707472218.312:491): pid=6366 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:18.334176 kernel: audit: type=1104 audit(1707472218.312:492): pid=6366 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:18.334080 systemd-logind[1793]: Session 25 logged out. Waiting for processes to exit.
Feb 9 09:50:18.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.30.62:22-139.178.89.65:54200 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:50:18.335877 systemd-logind[1793]: Removed session 25.
Feb 9 09:50:23.341439 systemd[1]: Started sshd@25-172.31.30.62:22-139.178.89.65:56372.service.
Feb 9 09:50:23.352316 kernel: kauditd_printk_skb: 1 callbacks suppressed
Feb 9 09:50:23.352450 kernel: audit: type=1130 audit(1707472223.340:494): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.30.62:22-139.178.89.65:56372 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:50:23.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.30.62:22-139.178.89.65:56372 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:50:23.528000 audit[6382]: USER_ACCT pid=6382 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:23.530614 sshd[6382]: Accepted publickey for core from 139.178.89.65 port 56372 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M
Feb 9 09:50:23.542616 kernel: audit: type=1101 audit(1707472223.528:495): pid=6382 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:23.541000 audit[6382]: CRED_ACQ pid=6382 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:23.544788 sshd[6382]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:50:23.558944 kernel: audit: type=1103 audit(1707472223.541:496): pid=6382 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:23.559139 kernel: audit: type=1006 audit(1707472223.542:497): pid=6382 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1
Feb 9 09:50:23.542000 audit[6382]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe5de0ed0 a2=3 a3=1 items=0 ppid=1 pid=6382 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:50:23.565201 systemd[1]: Started session-26.scope.
Feb 9 09:50:23.566638 systemd-logind[1793]: New session 26 of user core.
Feb 9 09:50:23.572201 kernel: audit: type=1300 audit(1707472223.542:497): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe5de0ed0 a2=3 a3=1 items=0 ppid=1 pid=6382 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:50:23.572310 kernel: audit: type=1327 audit(1707472223.542:497): proctitle=737368643A20636F7265205B707269765D
Feb 9 09:50:23.542000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 9 09:50:23.587000 audit[6382]: USER_START pid=6382 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:23.601000 audit[6387]: CRED_ACQ pid=6387 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:23.613616 kernel: audit: type=1105 audit(1707472223.587:498): pid=6382 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:23.613791 kernel: audit: type=1103 audit(1707472223.601:499): pid=6387 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:23.846666 sshd[6382]: pam_unix(sshd:session): session closed for user core
Feb 9 09:50:23.847000 audit[6382]: USER_END pid=6382 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:23.857641 systemd[1]: sshd@25-172.31.30.62:22-139.178.89.65:56372.service: Deactivated successfully.
Feb 9 09:50:23.859693 systemd[1]: session-26.scope: Deactivated successfully.
Feb 9 09:50:23.853000 audit[6382]: CRED_DISP pid=6382 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:23.870681 kernel: audit: type=1106 audit(1707472223.847:500): pid=6382 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:23.870809 kernel: audit: type=1104 audit(1707472223.853:501): pid=6382 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:23.853000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.30.62:22-139.178.89.65:56372 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:50:23.872092 systemd-logind[1793]: Session 26 logged out. Waiting for processes to exit.
Feb 9 09:50:23.874803 systemd-logind[1793]: Removed session 26.
Feb 9 09:50:28.883034 kernel: kauditd_printk_skb: 1 callbacks suppressed
Feb 9 09:50:28.883250 kernel: audit: type=1130 audit(1707472228.871:503): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.30.62:22-139.178.89.65:33114 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:50:28.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.30.62:22-139.178.89.65:33114 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:50:28.871748 systemd[1]: Started sshd@26-172.31.30.62:22-139.178.89.65:33114.service.
Feb 9 09:50:29.042000 audit[6398]: USER_ACCT pid=6398 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:29.046135 sshd[6398]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:50:29.050362 sshd[6398]: Accepted publickey for core from 139.178.89.65 port 33114 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M
Feb 9 09:50:29.044000 audit[6398]: CRED_ACQ pid=6398 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:29.065317 kernel: audit: type=1101 audit(1707472229.042:504): pid=6398 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:29.065453 kernel: audit: type=1103 audit(1707472229.044:505): pid=6398 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:29.073878 kernel: audit: type=1006 audit(1707472229.044:506): pid=6398 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1
Feb 9 09:50:29.074028 kernel: audit: type=1300 audit(1707472229.044:506): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd4b9c980 a2=3 a3=1 items=0 ppid=1 pid=6398 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:50:29.044000 audit[6398]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd4b9c980 a2=3 a3=1 items=0 ppid=1 pid=6398 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:50:29.091246 systemd[1]: Started session-27.scope.
Feb 9 09:50:29.044000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 9 09:50:29.092096 systemd-logind[1793]: New session 27 of user core.
Feb 9 09:50:29.098469 kernel: audit: type=1327 audit(1707472229.044:506): proctitle=737368643A20636F7265205B707269765D
Feb 9 09:50:29.121000 audit[6398]: USER_START pid=6398 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:29.125000 audit[6402]: CRED_ACQ pid=6402 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:29.142811 kernel: audit: type=1105 audit(1707472229.121:507): pid=6398 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:29.142930 kernel: audit: type=1103 audit(1707472229.125:508): pid=6402 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:29.348901 sshd[6398]: pam_unix(sshd:session): session closed for user core
Feb 9 09:50:29.350000 audit[6398]: USER_END pid=6398 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:29.356498 systemd[1]: sshd@26-172.31.30.62:22-139.178.89.65:33114.service: Deactivated successfully.
Feb 9 09:50:29.358707 systemd[1]: session-27.scope: Deactivated successfully.
Feb 9 09:50:29.365094 systemd-logind[1793]: Session 27 logged out. Waiting for processes to exit.
Feb 9 09:50:29.353000 audit[6398]: CRED_DISP pid=6398 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:29.375644 kernel: audit: type=1106 audit(1707472229.350:509): pid=6398 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:29.375869 kernel: audit: type=1104 audit(1707472229.353:510): pid=6398 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:29.356000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.30.62:22-139.178.89.65:33114 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:50:29.379000 systemd-logind[1793]: Removed session 27.
Feb 9 09:50:31.598674 systemd[1]: run-containerd-runc-k8s.io-f32f686d7761da23a9ab57c3c42e55447a31ba7a09ecf86226656ab56f20a996-runc.Lp2Sn5.mount: Deactivated successfully.
Feb 9 09:50:34.380017 kernel: kauditd_printk_skb: 1 callbacks suppressed
Feb 9 09:50:34.380168 kernel: audit: type=1130 audit(1707472234.377:512): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-172.31.30.62:22-139.178.89.65:33128 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:50:34.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-172.31.30.62:22-139.178.89.65:33128 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:50:34.377635 systemd[1]: Started sshd@27-172.31.30.62:22-139.178.89.65:33128.service.
Feb 9 09:50:34.553000 audit[6433]: USER_ACCT pid=6433 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:34.554356 sshd[6433]: Accepted publickey for core from 139.178.89.65 port 33128 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M
Feb 9 09:50:34.557812 sshd[6433]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:50:34.556000 audit[6433]: CRED_ACQ pid=6433 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:34.573892 kernel: audit: type=1101 audit(1707472234.553:513): pid=6433 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:34.573972 kernel: audit: type=1103 audit(1707472234.556:514): pid=6433 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:34.580132 kernel: audit: type=1006 audit(1707472234.556:515): pid=6433 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1
Feb 9 09:50:34.556000 audit[6433]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff0b255a0 a2=3 a3=1 items=0 ppid=1 pid=6433 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:50:34.590990 kernel: audit: type=1300 audit(1707472234.556:515): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff0b255a0 a2=3 a3=1 items=0 ppid=1 pid=6433 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:50:34.592763 kernel: audit: type=1327 audit(1707472234.556:515): proctitle=737368643A20636F7265205B707269765D
Feb 9 09:50:34.556000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 9 09:50:34.602521 systemd[1]: Started session-28.scope.
Feb 9 09:50:34.604960 systemd-logind[1793]: New session 28 of user core.
Feb 9 09:50:34.621000 audit[6433]: USER_START pid=6433 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:34.633000 audit[6437]: CRED_ACQ pid=6437 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:34.643923 kernel: audit: type=1105 audit(1707472234.621:516): pid=6433 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:34.644226 kernel: audit: type=1103 audit(1707472234.633:517): pid=6437 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:34.866918 sshd[6433]: pam_unix(sshd:session): session closed for user core
Feb 9 09:50:34.868000 audit[6433]: USER_END pid=6433 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:34.881389 systemd[1]: sshd@27-172.31.30.62:22-139.178.89.65:33128.service: Deactivated successfully.
Feb 9 09:50:34.868000 audit[6433]: CRED_DISP pid=6433 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:34.892577 kernel: audit: type=1106 audit(1707472234.868:518): pid=6433 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:34.892731 kernel: audit: type=1104 audit(1707472234.868:519): pid=6433 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Feb 9 09:50:34.892870 systemd[1]: session-28.scope: Deactivated successfully.
Feb 9 09:50:34.893988 systemd-logind[1793]: Session 28 logged out. Waiting for processes to exit.
Feb 9 09:50:34.881000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-172.31.30.62:22-139.178.89.65:33128 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:50:34.896353 systemd-logind[1793]: Removed session 28.
Feb 9 09:50:40.679289 systemd[1]: run-containerd-runc-k8s.io-10e17c726ec2f1b2d7248174df2fb241eacb177362b6aca86949aaf295b12deb-runc.Y82fpq.mount: Deactivated successfully.
Feb 9 09:50:43.589188 systemd[1]: run-containerd-runc-k8s.io-29132a5ada4c7efd8b06cf8e8587afd089835cf7869a279a10998ceb12a16738-runc.EAVOyl.mount: Deactivated successfully.
Feb 9 09:50:48.658987 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-315c98a85a755f3a9e8f19c5ba2869601673768b40be27c8df9b446923999c0f-rootfs.mount: Deactivated successfully.
Feb 9 09:50:48.661767 env[1801]: time="2024-02-09T09:50:48.661703355Z" level=info msg="shim disconnected" id=315c98a85a755f3a9e8f19c5ba2869601673768b40be27c8df9b446923999c0f
Feb 9 09:50:48.662591 env[1801]: time="2024-02-09T09:50:48.662514030Z" level=warning msg="cleaning up after shim disconnected" id=315c98a85a755f3a9e8f19c5ba2869601673768b40be27c8df9b446923999c0f namespace=k8s.io
Feb 9 09:50:48.662767 env[1801]: time="2024-02-09T09:50:48.662738034Z" level=info msg="cleaning up dead shim"
Feb 9 09:50:48.677973 env[1801]: time="2024-02-09T09:50:48.677916912Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:50:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6520 runtime=io.containerd.runc.v2\n"
Feb 9 09:50:48.811350 kubelet[3093]: I0209 09:50:48.810876 3093 scope.go:115] "RemoveContainer" containerID="315c98a85a755f3a9e8f19c5ba2869601673768b40be27c8df9b446923999c0f"
Feb 9 09:50:48.815421 env[1801]: time="2024-02-09T09:50:48.815348127Z" level=info msg="CreateContainer within sandbox \"019bedce69f50ba86d451ffc6eb829ffe4cfeb6c7ea73072dc90b6bed1954e29\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Feb 9 09:50:48.843799 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2339964593.mount: Deactivated successfully.
Feb 9 09:50:48.850001 env[1801]: time="2024-02-09T09:50:48.849922296Z" level=info msg="CreateContainer within sandbox \"019bedce69f50ba86d451ffc6eb829ffe4cfeb6c7ea73072dc90b6bed1954e29\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"2d2ed2264fe339815ab7c76b83be529d3766f259332bc0c782606b58c9ee15f5\""
Feb 9 09:50:48.850936 env[1801]: time="2024-02-09T09:50:48.850875428Z" level=info msg="StartContainer for \"2d2ed2264fe339815ab7c76b83be529d3766f259332bc0c782606b58c9ee15f5\""
Feb 9 09:50:48.960489 env[1801]: time="2024-02-09T09:50:48.959926305Z" level=info msg="StartContainer for \"2d2ed2264fe339815ab7c76b83be529d3766f259332bc0c782606b58c9ee15f5\" returns successfully"
Feb 9 09:50:49.826111 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c854234f199e2ed200562cccc52746d75160a4eee23f2fa96baf021294ae21bc-rootfs.mount: Deactivated successfully.
Feb 9 09:50:49.828693 env[1801]: time="2024-02-09T09:50:49.828631628Z" level=info msg="shim disconnected" id=c854234f199e2ed200562cccc52746d75160a4eee23f2fa96baf021294ae21bc
Feb 9 09:50:49.829365 env[1801]: time="2024-02-09T09:50:49.829325815Z" level=warning msg="cleaning up after shim disconnected" id=c854234f199e2ed200562cccc52746d75160a4eee23f2fa96baf021294ae21bc namespace=k8s.io
Feb 9 09:50:49.829485 env[1801]: time="2024-02-09T09:50:49.829456296Z" level=info msg="cleaning up dead shim"
Feb 9 09:50:49.845350 env[1801]: time="2024-02-09T09:50:49.845294672Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:50:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6581 runtime=io.containerd.runc.v2\n"
Feb 9 09:50:50.822449 kubelet[3093]: I0209 09:50:50.822409 3093 scope.go:115] "RemoveContainer" containerID="c854234f199e2ed200562cccc52746d75160a4eee23f2fa96baf021294ae21bc"
Feb 9 09:50:50.827974 env[1801]: time="2024-02-09T09:50:50.827919943Z" level=info msg="CreateContainer within sandbox \"140e509204bf0d902409090f3b0e506051677874b7fd09d1d463d59a2b65f86e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Feb 9 09:50:50.861173 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount178691245.mount: Deactivated successfully.
Feb 9 09:50:50.867716 env[1801]: time="2024-02-09T09:50:50.867653001Z" level=info msg="CreateContainer within sandbox \"140e509204bf0d902409090f3b0e506051677874b7fd09d1d463d59a2b65f86e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"436bdd05247f2576602b88b9d7a1fc0923b6a34dfd9f5c71f1eb2ce917d6dfb5\""
Feb 9 09:50:50.869023 env[1801]: time="2024-02-09T09:50:50.868976413Z" level=info msg="StartContainer for \"436bdd05247f2576602b88b9d7a1fc0923b6a34dfd9f5c71f1eb2ce917d6dfb5\""
Feb 9 09:50:50.869549 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount597818631.mount: Deactivated successfully.
Feb 9 09:50:51.007032 env[1801]: time="2024-02-09T09:50:51.005451174Z" level=info msg="StartContainer for \"436bdd05247f2576602b88b9d7a1fc0923b6a34dfd9f5c71f1eb2ce917d6dfb5\" returns successfully"
Feb 9 09:50:51.929505 kubelet[3093]: E0209 09:50:51.929448 3093 controller.go:189] failed to update lease, error: Put "https://172.31.30.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-62?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 9 09:50:53.184740 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-20e8b47edce09d299df129f41d4bc60e6e26619ecd67621feec7967186d4bf11-rootfs.mount: Deactivated successfully.
Feb 9 09:50:53.189429 env[1801]: time="2024-02-09T09:50:53.189358043Z" level=info msg="shim disconnected" id=20e8b47edce09d299df129f41d4bc60e6e26619ecd67621feec7967186d4bf11
Feb 9 09:50:53.190057 env[1801]: time="2024-02-09T09:50:53.189432452Z" level=warning msg="cleaning up after shim disconnected" id=20e8b47edce09d299df129f41d4bc60e6e26619ecd67621feec7967186d4bf11 namespace=k8s.io
Feb 9 09:50:53.190057 env[1801]: time="2024-02-09T09:50:53.189456391Z" level=info msg="cleaning up dead shim"
Feb 9 09:50:53.209093 env[1801]: time="2024-02-09T09:50:53.209036570Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:50:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6641 runtime=io.containerd.runc.v2\n"
Feb 9 09:50:53.835454 kubelet[3093]: I0209 09:50:53.835395 3093 scope.go:115] "RemoveContainer" containerID="20e8b47edce09d299df129f41d4bc60e6e26619ecd67621feec7967186d4bf11"
Feb 9 09:50:53.839379 env[1801]: time="2024-02-09T09:50:53.839292678Z" level=info msg="CreateContainer within sandbox \"35773c57b99d68c5f9d8697583779ddda7c809aae91d98fba261db9f69f673aa\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Feb 9 09:50:53.869731 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2228543811.mount: Deactivated successfully.
Feb 9 09:50:53.882895 env[1801]: time="2024-02-09T09:50:53.882809672Z" level=info msg="CreateContainer within sandbox \"35773c57b99d68c5f9d8697583779ddda7c809aae91d98fba261db9f69f673aa\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"73c20c68831aa1e923f2729f8e7d14269e07fcebec284417a13a4bd3e777fdfe\""
Feb 9 09:50:53.883742 env[1801]: time="2024-02-09T09:50:53.883678407Z" level=info msg="StartContainer for \"73c20c68831aa1e923f2729f8e7d14269e07fcebec284417a13a4bd3e777fdfe\""
Feb 9 09:50:54.008608 env[1801]: time="2024-02-09T09:50:54.007755559Z" level=info msg="StartContainer for \"73c20c68831aa1e923f2729f8e7d14269e07fcebec284417a13a4bd3e777fdfe\" returns successfully"
Feb 9 09:51:01.594451 systemd[1]: run-containerd-runc-k8s.io-f32f686d7761da23a9ab57c3c42e55447a31ba7a09ecf86226656ab56f20a996-runc.PoIqHG.mount: Deactivated successfully.
Feb 9 09:51:01.931170 kubelet[3093]: E0209 09:51:01.931019 3093 controller.go:189] failed to update lease, error: Put "https://172.31.30.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-62?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)