Feb 9 19:14:36.949488 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Feb 9 19:14:36.949525 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Feb 9 17:24:35 -00 2024
Feb 9 19:14:36.949547 kernel: efi: EFI v2.70 by EDK II
Feb 9 19:14:36.949562 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7ac1aa98 MEMRESERVE=0x71a8cf98
Feb 9 19:14:36.949575 kernel: ACPI: Early table checksum verification disabled
Feb 9 19:14:36.949589 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Feb 9 19:14:36.949604 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Feb 9 19:14:36.949619 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 9 19:14:36.949632 kernel: ACPI: DSDT 0x0000000078640000 00154F (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Feb 9 19:14:36.949646 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 9 19:14:36.949664 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Feb 9 19:14:36.949677 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Feb 9 19:14:36.949691 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Feb 9 19:14:36.949705 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 9 19:14:36.949721 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Feb 9 19:14:36.949740 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Feb 9 19:14:36.949775 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Feb 9 19:14:36.949793 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Feb 9 19:14:36.949808 kernel: printk: bootconsole [uart0] enabled
Feb 9 19:14:36.949823 kernel: NUMA: Failed to initialise from firmware
Feb 9 19:14:36.949838 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 9 19:14:36.949852 kernel: NUMA: NODE_DATA [mem 0x4b5841900-0x4b5846fff]
Feb 9 19:14:36.949867 kernel: Zone ranges:
Feb 9 19:14:36.949881 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Feb 9 19:14:36.949895 kernel: DMA32 empty
Feb 9 19:14:36.949910 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Feb 9 19:14:36.949929 kernel: Movable zone start for each node
Feb 9 19:14:36.949943 kernel: Early memory node ranges
Feb 9 19:14:36.949958 kernel: node 0: [mem 0x0000000040000000-0x00000000786effff]
Feb 9 19:14:36.949972 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Feb 9 19:14:36.949987 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Feb 9 19:14:36.950001 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Feb 9 19:14:36.950015 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Feb 9 19:14:36.950029 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Feb 9 19:14:36.950043 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 9 19:14:36.950058 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Feb 9 19:14:36.950072 kernel: psci: probing for conduit method from ACPI.
Feb 9 19:14:36.950086 kernel: psci: PSCIv1.0 detected in firmware.
Feb 9 19:14:36.950104 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 9 19:14:36.950119 kernel: psci: Trusted OS migration not required
Feb 9 19:14:36.950140 kernel: psci: SMC Calling Convention v1.1
Feb 9 19:14:36.950155 kernel: ACPI: SRAT not present
Feb 9 19:14:36.950171 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784
Feb 9 19:14:36.950190 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096
Feb 9 19:14:36.950205 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 9 19:14:36.950220 kernel: Detected PIPT I-cache on CPU0
Feb 9 19:14:36.950235 kernel: CPU features: detected: GIC system register CPU interface
Feb 9 19:14:36.950250 kernel: CPU features: detected: Spectre-v2
Feb 9 19:14:36.950265 kernel: CPU features: detected: Spectre-v3a
Feb 9 19:14:36.950280 kernel: CPU features: detected: Spectre-BHB
Feb 9 19:14:36.950295 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 9 19:14:36.950310 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 9 19:14:36.950325 kernel: CPU features: detected: ARM erratum 1742098
Feb 9 19:14:36.950340 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Feb 9 19:14:36.950359 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Feb 9 19:14:36.950374 kernel: Policy zone: Normal
Feb 9 19:14:36.950391 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=680ffc8c0dfb23738bd19ec96ea37b5bbadfb5cebf23767d1d52c89a6d5c00b4
Feb 9 19:14:36.950407 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 19:14:36.950422 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 9 19:14:36.950438 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 9 19:14:36.950453 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 19:14:36.950468 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Feb 9 19:14:36.950484 kernel: Memory: 3826316K/4030464K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 204148K reserved, 0K cma-reserved)
Feb 9 19:14:36.950499 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 9 19:14:36.950517 kernel: trace event string verifier disabled
Feb 9 19:14:36.950533 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 9 19:14:36.950549 kernel: rcu: RCU event tracing is enabled.
Feb 9 19:14:36.950564 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 9 19:14:36.950580 kernel: Trampoline variant of Tasks RCU enabled.
Feb 9 19:14:36.950595 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 19:14:36.950610 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 19:14:36.950626 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 9 19:14:36.950640 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 9 19:14:36.950655 kernel: GICv3: 96 SPIs implemented
Feb 9 19:14:36.950670 kernel: GICv3: 0 Extended SPIs implemented
Feb 9 19:14:36.950685 kernel: GICv3: Distributor has no Range Selector support
Feb 9 19:14:36.950704 kernel: Root IRQ handler: gic_handle_irq
Feb 9 19:14:36.950719 kernel: GICv3: 16 PPIs implemented
Feb 9 19:14:36.950734 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Feb 9 19:14:36.950761 kernel: ACPI: SRAT not present
Feb 9 19:14:36.950782 kernel: ITS [mem 0x10080000-0x1009ffff]
Feb 9 19:14:36.950798 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000a0000 (indirect, esz 8, psz 64K, shr 1)
Feb 9 19:14:36.950813 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000b0000 (flat, esz 8, psz 64K, shr 1)
Feb 9 19:14:36.950829 kernel: GICv3: using LPI property table @0x00000004000c0000
Feb 9 19:14:36.950844 kernel: ITS: Using hypervisor restricted LPI range [128]
Feb 9 19:14:36.950859 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000d0000
Feb 9 19:14:36.950874 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Feb 9 19:14:36.950894 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Feb 9 19:14:36.950910 kernel: sched_clock: 56 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Feb 9 19:14:36.950925 kernel: Console: colour dummy device 80x25
Feb 9 19:14:36.950940 kernel: printk: console [tty1] enabled
Feb 9 19:14:36.950956 kernel: ACPI: Core revision 20210730
Feb 9 19:14:36.950972 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Feb 9 19:14:36.950988 kernel: pid_max: default: 32768 minimum: 301
Feb 9 19:14:36.951003 kernel: LSM: Security Framework initializing
Feb 9 19:14:36.951019 kernel: SELinux: Initializing.
Feb 9 19:14:36.951034 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 19:14:36.951054 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 19:14:36.951069 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 19:14:36.951085 kernel: Platform MSI: ITS@0x10080000 domain created
Feb 9 19:14:36.951100 kernel: PCI/MSI: ITS@0x10080000 domain created
Feb 9 19:14:36.951116 kernel: Remapping and enabling EFI services.
Feb 9 19:14:36.951131 kernel: smp: Bringing up secondary CPUs ...
Feb 9 19:14:36.951147 kernel: Detected PIPT I-cache on CPU1
Feb 9 19:14:36.951163 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Feb 9 19:14:36.951178 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000e0000
Feb 9 19:14:36.951198 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Feb 9 19:14:36.951213 kernel: smp: Brought up 1 node, 2 CPUs
Feb 9 19:14:36.951249 kernel: SMP: Total of 2 processors activated.
Feb 9 19:14:36.951265 kernel: CPU features: detected: 32-bit EL0 Support
Feb 9 19:14:36.951280 kernel: CPU features: detected: 32-bit EL1 Support
Feb 9 19:14:36.951296 kernel: CPU features: detected: CRC32 instructions
Feb 9 19:14:36.951311 kernel: CPU: All CPU(s) started at EL1
Feb 9 19:14:36.951326 kernel: alternatives: patching kernel code
Feb 9 19:14:36.951341 kernel: devtmpfs: initialized
Feb 9 19:14:36.951362 kernel: KASLR disabled due to lack of seed
Feb 9 19:14:36.951378 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 19:14:36.951394 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 9 19:14:36.951420 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 19:14:36.951440 kernel: SMBIOS 3.0.0 present.
Feb 9 19:14:36.951456 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Feb 9 19:14:36.951472 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 19:14:36.951487 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 9 19:14:36.951504 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 9 19:14:36.951520 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 9 19:14:36.951536 kernel: audit: initializing netlink subsys (disabled)
Feb 9 19:14:36.951552 kernel: audit: type=2000 audit(0.248:1): state=initialized audit_enabled=0 res=1
Feb 9 19:14:36.951572 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 19:14:36.951589 kernel: cpuidle: using governor menu
Feb 9 19:14:36.951605 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 9 19:14:36.951621 kernel: ASID allocator initialised with 32768 entries
Feb 9 19:14:36.951637 kernel: ACPI: bus type PCI registered
Feb 9 19:14:36.951658 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 19:14:36.951674 kernel: Serial: AMBA PL011 UART driver
Feb 9 19:14:36.951690 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 19:14:36.951706 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Feb 9 19:14:36.951722 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 19:14:36.951739 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Feb 9 19:14:36.951771 kernel: cryptd: max_cpu_qlen set to 1000
Feb 9 19:14:36.951790 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 9 19:14:36.951806 kernel: ACPI: Added _OSI(Module Device)
Feb 9 19:14:36.951827 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 19:14:36.951844 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 19:14:36.951860 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 19:14:36.951876 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 19:14:36.951892 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 19:14:36.951907 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 19:14:36.951924 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 9 19:14:36.951940 kernel: ACPI: Interpreter enabled
Feb 9 19:14:36.951956 kernel: ACPI: Using GIC for interrupt routing
Feb 9 19:14:36.951975 kernel: ACPI: MCFG table detected, 1 entries
Feb 9 19:14:36.951992 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Feb 9 19:14:36.952281 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 9 19:14:36.952481 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 9 19:14:36.952677 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 9 19:14:36.957999 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Feb 9 19:14:36.958219 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Feb 9 19:14:36.958250 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Feb 9 19:14:36.958268 kernel: acpiphp: Slot [1] registered
Feb 9 19:14:36.958285 kernel: acpiphp: Slot [2] registered
Feb 9 19:14:36.958301 kernel: acpiphp: Slot [3] registered
Feb 9 19:14:36.958318 kernel: acpiphp: Slot [4] registered
Feb 9 19:14:36.958333 kernel: acpiphp: Slot [5] registered
Feb 9 19:14:36.958350 kernel: acpiphp: Slot [6] registered
Feb 9 19:14:36.958365 kernel: acpiphp: Slot [7] registered
Feb 9 19:14:36.958382 kernel: acpiphp: Slot [8] registered
Feb 9 19:14:36.958402 kernel: acpiphp: Slot [9] registered
Feb 9 19:14:36.958419 kernel: acpiphp: Slot [10] registered
Feb 9 19:14:36.958435 kernel: acpiphp: Slot [11] registered
Feb 9 19:14:36.958451 kernel: acpiphp: Slot [12] registered
Feb 9 19:14:36.958466 kernel: acpiphp: Slot [13] registered
Feb 9 19:14:36.958483 kernel: acpiphp: Slot [14] registered
Feb 9 19:14:36.958499 kernel: acpiphp: Slot [15] registered
Feb 9 19:14:36.958514 kernel: acpiphp: Slot [16] registered
Feb 9 19:14:36.958531 kernel: acpiphp: Slot [17] registered
Feb 9 19:14:36.958546 kernel: acpiphp: Slot [18] registered
Feb 9 19:14:36.958566 kernel: acpiphp: Slot [19] registered
Feb 9 19:14:36.958582 kernel: acpiphp: Slot [20] registered
Feb 9 19:14:36.958598 kernel: acpiphp: Slot [21] registered
Feb 9 19:14:36.958614 kernel: acpiphp: Slot [22] registered
Feb 9 19:14:36.958630 kernel: acpiphp: Slot [23] registered
Feb 9 19:14:36.958647 kernel: acpiphp: Slot [24] registered
Feb 9 19:14:36.958663 kernel: acpiphp: Slot [25] registered
Feb 9 19:14:36.958679 kernel: acpiphp: Slot [26] registered
Feb 9 19:14:36.958695 kernel: acpiphp: Slot [27] registered
Feb 9 19:14:36.958715 kernel: acpiphp: Slot [28] registered
Feb 9 19:14:36.958731 kernel: acpiphp: Slot [29] registered
Feb 9 19:14:36.958747 kernel: acpiphp: Slot [30] registered
Feb 9 19:14:36.958785 kernel: acpiphp: Slot [31] registered
Feb 9 19:14:36.958803 kernel: PCI host bridge to bus 0000:00
Feb 9 19:14:36.959005 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Feb 9 19:14:36.959189 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 9 19:14:36.959391 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Feb 9 19:14:36.959578 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Feb 9 19:14:36.959832 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Feb 9 19:14:36.960071 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Feb 9 19:14:36.960289 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Feb 9 19:14:36.960508 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 9 19:14:36.960715 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Feb 9 19:14:36.960948 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 9 19:14:36.961171 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 9 19:14:36.961388 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Feb 9 19:14:36.961594 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Feb 9 19:14:36.961822 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Feb 9 19:14:36.962030 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 9 19:14:36.962234 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Feb 9 19:14:36.962444 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Feb 9 19:14:36.962641 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Feb 9 19:14:36.966985 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Feb 9 19:14:36.967280 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Feb 9 19:14:36.967505 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Feb 9 19:14:36.967696 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 9 19:14:36.967918 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Feb 9 19:14:36.967951 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 9 19:14:36.967970 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 9 19:14:36.967988 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 9 19:14:36.968004 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 9 19:14:36.968021 kernel: iommu: Default domain type: Translated
Feb 9 19:14:36.968037 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 9 19:14:36.968054 kernel: vgaarb: loaded
Feb 9 19:14:36.968070 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 19:14:36.968087 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 9 19:14:36.968107 kernel: PTP clock support registered
Feb 9 19:14:36.968124 kernel: Registered efivars operations
Feb 9 19:14:36.968140 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 9 19:14:36.968156 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 19:14:36.968173 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 19:14:36.968189 kernel: pnp: PnP ACPI init
Feb 9 19:14:36.968421 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Feb 9 19:14:36.968448 kernel: pnp: PnP ACPI: found 1 devices
Feb 9 19:14:36.968466 kernel: NET: Registered PF_INET protocol family
Feb 9 19:14:36.968488 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 19:14:36.968505 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 9 19:14:36.968522 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 19:14:36.968539 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 9 19:14:36.968555 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 9 19:14:36.968572 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 9 19:14:36.968588 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 19:14:36.968605 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 19:14:36.968622 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 19:14:36.968642 kernel: PCI: CLS 0 bytes, default 64
Feb 9 19:14:36.968659 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Feb 9 19:14:36.968675 kernel: kvm [1]: HYP mode not available
Feb 9 19:14:36.968691 kernel: Initialise system trusted keyrings
Feb 9 19:14:36.968708 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 9 19:14:36.968724 kernel: Key type asymmetric registered
Feb 9 19:14:36.968740 kernel: Asymmetric key parser 'x509' registered
Feb 9 19:14:36.968780 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 19:14:36.978011 kernel: io scheduler mq-deadline registered
Feb 9 19:14:36.978045 kernel: io scheduler kyber registered
Feb 9 19:14:36.978062 kernel: io scheduler bfq registered
Feb 9 19:14:36.978362 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Feb 9 19:14:36.978396 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 9 19:14:36.978413 kernel: ACPI: button: Power Button [PWRB]
Feb 9 19:14:36.978431 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 19:14:36.978448 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Feb 9 19:14:36.978673 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Feb 9 19:14:36.978707 kernel: printk: console [ttyS0] disabled
Feb 9 19:14:36.978725 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Feb 9 19:14:36.978742 kernel: printk: console [ttyS0] enabled
Feb 9 19:14:36.978934 kernel: printk: bootconsole [uart0] disabled
Feb 9 19:14:36.978954 kernel: thunder_xcv, ver 1.0
Feb 9 19:14:36.978970 kernel: thunder_bgx, ver 1.0
Feb 9 19:14:36.978986 kernel: nicpf, ver 1.0
Feb 9 19:14:36.979002 kernel: nicvf, ver 1.0
Feb 9 19:14:36.979259 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 9 19:14:36.979468 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-09T19:14:36 UTC (1707506076)
Feb 9 19:14:36.979493 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 9 19:14:36.979510 kernel: NET: Registered PF_INET6 protocol family
Feb 9 19:14:36.979526 kernel: Segment Routing with IPv6
Feb 9 19:14:36.979542 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 19:14:36.979559 kernel: NET: Registered PF_PACKET protocol family
Feb 9 19:14:36.979575 kernel: Key type dns_resolver registered
Feb 9 19:14:36.979591 kernel: registered taskstats version 1
Feb 9 19:14:36.979613 kernel: Loading compiled-in X.509 certificates
Feb 9 19:14:36.979630 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 947a80114e81e2815f6db72a0d388260762488f9'
Feb 9 19:14:36.979646 kernel: Key type .fscrypt registered
Feb 9 19:14:36.979662 kernel: Key type fscrypt-provisioning registered
Feb 9 19:14:36.979678 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 9 19:14:36.979694 kernel: ima: Allocated hash algorithm: sha1
Feb 9 19:14:36.979711 kernel: ima: No architecture policies found
Feb 9 19:14:36.979727 kernel: Freeing unused kernel memory: 34688K
Feb 9 19:14:36.979743 kernel: Run /init as init process
Feb 9 19:14:36.980879 kernel: with arguments:
Feb 9 19:14:36.980903 kernel: /init
Feb 9 19:14:36.980919 kernel: with environment:
Feb 9 19:14:36.980935 kernel: HOME=/
Feb 9 19:14:36.980951 kernel: TERM=linux
Feb 9 19:14:36.980968 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 9 19:14:36.980989 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 19:14:36.981011 systemd[1]: Detected virtualization amazon.
Feb 9 19:14:36.981033 systemd[1]: Detected architecture arm64.
Feb 9 19:14:36.981051 systemd[1]: Running in initrd.
Feb 9 19:14:36.981068 systemd[1]: No hostname configured, using default hostname.
Feb 9 19:14:36.981085 systemd[1]: Hostname set to .
Feb 9 19:14:36.981103 systemd[1]: Initializing machine ID from VM UUID.
Feb 9 19:14:36.981121 systemd[1]: Queued start job for default target initrd.target.
Feb 9 19:14:36.981138 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 19:14:36.981155 systemd[1]: Reached target cryptsetup.target.
Feb 9 19:14:36.981177 systemd[1]: Reached target paths.target.
Feb 9 19:14:36.981195 systemd[1]: Reached target slices.target.
Feb 9 19:14:36.981213 systemd[1]: Reached target swap.target.
Feb 9 19:14:36.981230 systemd[1]: Reached target timers.target.
Feb 9 19:14:36.981248 systemd[1]: Listening on iscsid.socket.
Feb 9 19:14:36.981266 systemd[1]: Listening on iscsiuio.socket.
Feb 9 19:14:36.981284 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 19:14:36.981301 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 19:14:36.981323 systemd[1]: Listening on systemd-journald.socket.
Feb 9 19:14:36.981341 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 19:14:36.981358 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 19:14:36.981376 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 19:14:36.981394 systemd[1]: Reached target sockets.target.
Feb 9 19:14:36.981411 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 19:14:36.981429 systemd[1]: Finished network-cleanup.service.
Feb 9 19:14:36.981447 systemd[1]: Starting systemd-fsck-usr.service...
Feb 9 19:14:36.981464 systemd[1]: Starting systemd-journald.service...
Feb 9 19:14:36.981486 systemd[1]: Starting systemd-modules-load.service...
Feb 9 19:14:36.981503 systemd[1]: Starting systemd-resolved.service...
Feb 9 19:14:36.981521 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 9 19:14:36.981538 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 19:14:36.981556 kernel: audit: type=1130 audit(1707506076.967:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:36.981574 systemd[1]: Finished systemd-fsck-usr.service.
Feb 9 19:14:36.981595 systemd-journald[308]: Journal started
Feb 9 19:14:36.981689 systemd-journald[308]: Runtime Journal (/run/log/journal/ec299702e12c4d64a11ed3b2a45329ad) is 8.0M, max 75.4M, 67.4M free.
Feb 9 19:14:36.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:36.952804 systemd-modules-load[309]: Inserted module 'overlay'
Feb 9 19:14:36.996337 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 9 19:14:37.001518 systemd-modules-load[309]: Inserted module 'br_netfilter'
Feb 9 19:14:37.005714 kernel: Bridge firewalling registered
Feb 9 19:14:37.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:37.014769 kernel: audit: type=1130 audit(1707506077.004:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:37.014801 systemd[1]: Started systemd-journald.service.
Feb 9 19:14:37.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:37.024578 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 9 19:14:37.038969 kernel: audit: type=1130 audit(1707506077.022:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:37.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:37.049778 kernel: audit: type=1130 audit(1707506077.037:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:37.049851 kernel: SCSI subsystem initialized
Feb 9 19:14:37.064952 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 9 19:14:37.065018 kernel: device-mapper: uevent: version 1.0.3
Feb 9 19:14:37.063303 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 9 19:14:37.069898 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 19:14:37.079780 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 9 19:14:37.095513 systemd-resolved[310]: Positive Trust Anchors:
Feb 9 19:14:37.095540 systemd-resolved[310]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 19:14:37.095594 systemd-resolved[310]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 19:14:37.110265 systemd-modules-load[309]: Inserted module 'dm_multipath'
Feb 9 19:14:37.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:37.110571 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 9 19:14:37.124930 kernel: audit: type=1130 audit(1707506077.111:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:37.121458 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 19:14:37.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:37.127052 systemd[1]: Finished systemd-modules-load.service.
Feb 9 19:14:37.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:37.145215 kernel: audit: type=1130 audit(1707506077.125:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:37.145272 kernel: audit: type=1130 audit(1707506077.136:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:37.147007 systemd[1]: Starting dracut-cmdline.service...
Feb 9 19:14:37.151772 systemd[1]: Starting systemd-sysctl.service...
Feb 9 19:14:37.172049 dracut-cmdline[328]: dracut-dracut-053
Feb 9 19:14:37.177997 dracut-cmdline[328]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=680ffc8c0dfb23738bd19ec96ea37b5bbadfb5cebf23767d1d52c89a6d5c00b4
Feb 9 19:14:37.200316 systemd[1]: Finished systemd-sysctl.service.
Feb 9 19:14:37.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:37.211838 kernel: audit: type=1130 audit(1707506077.200:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:37.314792 kernel: Loading iSCSI transport class v2.0-870.
Feb 9 19:14:37.328794 kernel: iscsi: registered transport (tcp)
Feb 9 19:14:37.353145 kernel: iscsi: registered transport (qla4xxx)
Feb 9 19:14:37.353226 kernel: QLogic iSCSI HBA Driver
Feb 9 19:14:37.553682 systemd-resolved[310]: Defaulting to hostname 'linux'.
Feb 9 19:14:37.555532 kernel: random: crng init done
Feb 9 19:14:37.557335 systemd[1]: Started systemd-resolved.service.
Feb 9 19:14:37.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:37.559266 systemd[1]: Reached target nss-lookup.target.
Feb 9 19:14:37.583162 kernel: audit: type=1130 audit(1707506077.557:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:37.583370 systemd[1]: Finished dracut-cmdline.service.
Feb 9 19:14:37.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:37.587951 systemd[1]: Starting dracut-pre-udev.service...
Feb 9 19:14:37.652799 kernel: raid6: neonx8 gen() 6430 MB/s
Feb 9 19:14:37.670783 kernel: raid6: neonx8 xor() 4726 MB/s
Feb 9 19:14:37.688791 kernel: raid6: neonx4 gen() 6556 MB/s
Feb 9 19:14:37.706783 kernel: raid6: neonx4 xor() 4903 MB/s
Feb 9 19:14:37.724782 kernel: raid6: neonx2 gen() 5803 MB/s
Feb 9 19:14:37.742786 kernel: raid6: neonx2 xor() 4506 MB/s
Feb 9 19:14:37.760783 kernel: raid6: neonx1 gen() 4500 MB/s
Feb 9 19:14:37.778786 kernel: raid6: neonx1 xor() 3676 MB/s
Feb 9 19:14:37.796785 kernel: raid6: int64x8 gen() 3426 MB/s
Feb 9 19:14:37.814782 kernel: raid6: int64x8 xor() 2093 MB/s
Feb 9 19:14:37.832782 kernel: raid6: int64x4 gen() 3849 MB/s
Feb 9 19:14:37.850786 kernel: raid6: int64x4 xor() 2197 MB/s
Feb 9 19:14:37.868783 kernel: raid6: int64x2 gen() 3615 MB/s
Feb 9 19:14:37.886792 kernel: raid6: int64x2 xor() 1942 MB/s
Feb 9 19:14:37.904783 kernel: raid6: int64x1 gen() 2770 MB/s
Feb 9 19:14:37.924241 kernel: raid6: int64x1 xor() 1455 MB/s
Feb 9 19:14:37.924271 kernel: raid6: using algorithm neonx4 gen() 6556 MB/s
Feb 9 19:14:37.924295 kernel: raid6: .... xor() 4903 MB/s, rmw enabled
Feb 9 19:14:37.926031 kernel: raid6: using neon recovery algorithm
Feb 9 19:14:37.944789 kernel: xor: measuring software checksum speed
Feb 9 19:14:37.946782 kernel: 8regs : 9343 MB/sec
Feb 9 19:14:37.949783 kernel: 32regs : 11107 MB/sec
Feb 9 19:14:37.953813 kernel: arm64_neon : 9377 MB/sec
Feb 9 19:14:37.953845 kernel: xor: using function: 32regs (11107 MB/sec)
Feb 9 19:14:38.043807 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Feb 9 19:14:38.061012 systemd[1]: Finished dracut-pre-udev.service.
Feb 9 19:14:38.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:38.063000 audit: BPF prog-id=7 op=LOAD
Feb 9 19:14:38.063000 audit: BPF prog-id=8 op=LOAD
Feb 9 19:14:38.065679 systemd[1]: Starting systemd-udevd.service...
Feb 9 19:14:38.099506 systemd-udevd[508]: Using default interface naming scheme 'v252'.
Feb 9 19:14:38.110407 systemd[1]: Started systemd-udevd.service.
Feb 9 19:14:38.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:38.117738 systemd[1]: Starting dracut-pre-trigger.service...
Feb 9 19:14:38.146969 dracut-pre-trigger[517]: rd.md=0: removing MD RAID activation
Feb 9 19:14:38.206824 systemd[1]: Finished dracut-pre-trigger.service.
Feb 9 19:14:38.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:38.213733 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 19:14:38.317628 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 19:14:38.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:38.445291 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 9 19:14:38.445363 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Feb 9 19:14:38.456398 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Feb 9 19:14:38.456474 kernel: nvme nvme0: pci function 0000:00:04.0
Feb 9 19:14:38.464027 kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 9 19:14:38.464388 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 9 19:14:38.472233 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 9 19:14:38.472699 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:87:5a:e1:71:57
Feb 9 19:14:38.479685 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 9 19:14:38.479773 kernel: GPT:9289727 != 16777215
Feb 9 19:14:38.479802 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 9 19:14:38.483177 kernel: GPT:9289727 != 16777215
Feb 9 19:14:38.483232 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 9 19:14:38.486557 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 9 19:14:38.490598 (udev-worker)[563]: Network interface NamePolicy= disabled on kernel command line.
Feb 9 19:14:38.548797 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (567)
Feb 9 19:14:38.575840 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 9 19:14:38.636419 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 19:14:38.667040 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 9 19:14:38.672007 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 9 19:14:38.686071 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 9 19:14:38.719645 systemd[1]: Starting disk-uuid.service...
Feb 9 19:14:38.736794 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 9 19:14:38.739862 disk-uuid[671]: Primary Header is updated.
Feb 9 19:14:38.739862 disk-uuid[671]: Secondary Entries is updated.
Feb 9 19:14:38.739862 disk-uuid[671]: Secondary Header is updated.
Feb 9 19:14:38.758797 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 9 19:14:39.771879 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 9 19:14:39.772650 disk-uuid[672]: The operation has completed successfully.
Feb 9 19:14:39.941065 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 9 19:14:39.941673 systemd[1]: Finished disk-uuid.service.
Feb 9 19:14:39.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:39.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:39.967618 systemd[1]: Starting verity-setup.service...
Feb 9 19:14:40.003800 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 9 19:14:40.089322 systemd[1]: Found device dev-mapper-usr.device.
Feb 9 19:14:40.095593 systemd[1]: Mounting sysusr-usr.mount...
Feb 9 19:14:40.101531 systemd[1]: Finished verity-setup.service.
Feb 9 19:14:40.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:40.182800 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 9 19:14:40.183586 systemd[1]: Mounted sysusr-usr.mount.
Feb 9 19:14:40.187337 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 9 19:14:40.191198 systemd[1]: Starting ignition-setup.service...
Feb 9 19:14:40.194025 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 9 19:14:40.223337 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 9 19:14:40.223406 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 9 19:14:40.225590 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Feb 9 19:14:40.232789 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 9 19:14:40.249221 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 9 19:14:40.281195 systemd[1]: Finished ignition-setup.service.
Feb 9 19:14:40.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:40.285650 systemd[1]: Starting ignition-fetch-offline.service...
Feb 9 19:14:40.347226 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 9 19:14:40.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:40.350000 audit: BPF prog-id=9 op=LOAD
Feb 9 19:14:40.352799 systemd[1]: Starting systemd-networkd.service...
Feb 9 19:14:40.398450 systemd-networkd[1112]: lo: Link UP
Feb 9 19:14:40.398971 systemd-networkd[1112]: lo: Gained carrier
Feb 9 19:14:40.399974 systemd-networkd[1112]: Enumeration completed
Feb 9 19:14:40.400408 systemd-networkd[1112]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 19:14:40.405351 systemd[1]: Started systemd-networkd.service.
Feb 9 19:14:40.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:40.410663 systemd[1]: Reached target network.target.
Feb 9 19:14:40.428413 systemd-networkd[1112]: eth0: Link UP
Feb 9 19:14:40.428422 systemd-networkd[1112]: eth0: Gained carrier
Feb 9 19:14:40.432406 systemd[1]: Starting iscsiuio.service...
Feb 9 19:14:40.440488 systemd[1]: Started iscsiuio.service.
Feb 9 19:14:40.443553 systemd-networkd[1112]: eth0: DHCPv4 address 172.31.18.155/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 9 19:14:40.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:40.464003 systemd[1]: Starting iscsid.service...
Feb 9 19:14:40.471571 iscsid[1117]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 19:14:40.471571 iscsid[1117]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Feb 9 19:14:40.471571 iscsid[1117]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 9 19:14:40.471571 iscsid[1117]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 9 19:14:40.471571 iscsid[1117]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 19:14:40.485000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:40.508390 iscsid[1117]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 9 19:14:40.485476 systemd[1]: Started iscsid.service.
Feb 9 19:14:40.489068 systemd[1]: Starting dracut-initqueue.service...
Feb 9 19:14:40.524568 systemd[1]: Finished dracut-initqueue.service.
Feb 9 19:14:40.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:40.527909 systemd[1]: Reached target remote-fs-pre.target.
Feb 9 19:14:40.528706 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 19:14:40.529570 systemd[1]: Reached target remote-fs.target.
Feb 9 19:14:40.531191 systemd[1]: Starting dracut-pre-mount.service...
Feb 9 19:14:40.554645 systemd[1]: Finished dracut-pre-mount.service.
Feb 9 19:14:40.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:40.829610 ignition[1062]: Ignition 2.14.0
Feb 9 19:14:40.829636 ignition[1062]: Stage: fetch-offline
Feb 9 19:14:40.829968 ignition[1062]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:14:40.830032 ignition[1062]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 19:14:40.848494 ignition[1062]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 19:14:40.849495 ignition[1062]: Ignition finished successfully
Feb 9 19:14:40.853589 systemd[1]: Finished ignition-fetch-offline.service.
Feb 9 19:14:40.866484 kernel: kauditd_printk_skb: 18 callbacks suppressed
Feb 9 19:14:40.866526 kernel: audit: type=1130 audit(1707506080.854:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:40.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:40.857427 systemd[1]: Starting ignition-fetch.service...
Feb 9 19:14:40.876173 ignition[1136]: Ignition 2.14.0
Feb 9 19:14:40.876201 ignition[1136]: Stage: fetch
Feb 9 19:14:40.876478 ignition[1136]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:14:40.876544 ignition[1136]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 19:14:40.891271 ignition[1136]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 19:14:40.893618 ignition[1136]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 19:14:40.912283 ignition[1136]: INFO : PUT result: OK
Feb 9 19:14:40.915542 ignition[1136]: DEBUG : parsed url from cmdline: ""
Feb 9 19:14:40.915542 ignition[1136]: INFO : no config URL provided
Feb 9 19:14:40.915542 ignition[1136]: INFO : reading system config file "/usr/lib/ignition/user.ign"
Feb 9 19:14:40.921486 ignition[1136]: INFO : no config at "/usr/lib/ignition/user.ign"
Feb 9 19:14:40.921486 ignition[1136]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 19:14:40.921486 ignition[1136]: INFO : PUT result: OK
Feb 9 19:14:40.921486 ignition[1136]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 9 19:14:40.931792 ignition[1136]: INFO : GET result: OK
Feb 9 19:14:40.931792 ignition[1136]: DEBUG : parsing config with SHA512: 805ab9a195dcead1ab1ed066b85f7aea51d00b8c2afe663b698f59d47636afbd6f560b23c243596d37c85466be8e7665245716ff80ec998efc69b3c14643c5f7
Feb 9 19:14:40.997265 unknown[1136]: fetched base config from "system"
Feb 9 19:14:40.999367 unknown[1136]: fetched base config from "system"
Feb 9 19:14:41.000213 unknown[1136]: fetched user config from "aws"
Feb 9 19:14:41.001871 ignition[1136]: fetch: fetch complete
Feb 9 19:14:41.001885 ignition[1136]: fetch: fetch passed
Feb 9 19:14:41.001972 ignition[1136]: Ignition finished successfully
Feb 9 19:14:41.007543 systemd[1]: Finished ignition-fetch.service.
Feb 9 19:14:41.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:41.012910 systemd[1]: Starting ignition-kargs.service...
Feb 9 19:14:41.027776 kernel: audit: type=1130 audit(1707506081.009:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:41.031341 ignition[1142]: Ignition 2.14.0
Feb 9 19:14:41.031367 ignition[1142]: Stage: kargs
Feb 9 19:14:41.031666 ignition[1142]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:14:41.031725 ignition[1142]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 19:14:41.045312 ignition[1142]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 19:14:41.047612 ignition[1142]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 19:14:41.050449 ignition[1142]: INFO : PUT result: OK
Feb 9 19:14:41.055919 ignition[1142]: kargs: kargs passed
Feb 9 19:14:41.056037 ignition[1142]: Ignition finished successfully
Feb 9 19:14:41.060341 systemd[1]: Finished ignition-kargs.service.
Feb 9 19:14:41.063979 systemd[1]: Starting ignition-disks.service...
Feb 9 19:14:41.078277 ignition[1148]: Ignition 2.14.0
Feb 9 19:14:41.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:41.078294 ignition[1148]: Stage: disks
Feb 9 19:14:41.078582 ignition[1148]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:14:41.078634 ignition[1148]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 19:14:41.093774 kernel: audit: type=1130 audit(1707506081.060:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:41.098397 ignition[1148]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 19:14:41.100619 ignition[1148]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 19:14:41.103631 ignition[1148]: INFO : PUT result: OK
Feb 9 19:14:41.108960 ignition[1148]: disks: disks passed
Feb 9 19:14:41.109061 ignition[1148]: Ignition finished successfully
Feb 9 19:14:41.112645 systemd[1]: Finished ignition-disks.service.
Feb 9 19:14:41.124869 kernel: audit: type=1130 audit(1707506081.112:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:41.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:41.114902 systemd[1]: Reached target initrd-root-device.target.
Feb 9 19:14:41.126577 systemd[1]: Reached target local-fs-pre.target.
Feb 9 19:14:41.129652 systemd[1]: Reached target local-fs.target.
Feb 9 19:14:41.131238 systemd[1]: Reached target sysinit.target.
Feb 9 19:14:41.134133 systemd[1]: Reached target basic.target.
Feb 9 19:14:41.137077 systemd[1]: Starting systemd-fsck-root.service...
Feb 9 19:14:41.164026 systemd-fsck[1156]: ROOT: clean, 602/553520 files, 56013/553472 blocks
Feb 9 19:14:41.175099 systemd[1]: Finished systemd-fsck-root.service.
Feb 9 19:14:41.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:41.179696 systemd[1]: Mounting sysroot.mount...
Feb 9 19:14:41.187100 kernel: audit: type=1130 audit(1707506081.177:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:41.197787 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 9 19:14:41.199434 systemd[1]: Mounted sysroot.mount.
Feb 9 19:14:41.202536 systemd[1]: Reached target initrd-root-fs.target.
Feb 9 19:14:41.218283 systemd[1]: Mounting sysroot-usr.mount...
Feb 9 19:14:41.222194 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Feb 9 19:14:41.222682 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 9 19:14:41.226919 systemd[1]: Reached target ignition-diskful.target.
Feb 9 19:14:41.237518 systemd[1]: Mounted sysroot-usr.mount.
Feb 9 19:14:41.253136 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 9 19:14:41.258062 systemd[1]: Starting initrd-setup-root.service...
Feb 9 19:14:41.269866 initrd-setup-root[1178]: cut: /sysroot/etc/passwd: No such file or directory
Feb 9 19:14:41.279804 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1173)
Feb 9 19:14:41.287951 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 9 19:14:41.288003 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 9 19:14:41.292087 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Feb 9 19:14:41.292156 initrd-setup-root[1186]: cut: /sysroot/etc/group: No such file or directory
Feb 9 19:14:41.300442 initrd-setup-root[1210]: cut: /sysroot/etc/shadow: No such file or directory
Feb 9 19:14:41.306787 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 9 19:14:41.312850 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 19:14:41.317705 initrd-setup-root[1220]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 9 19:14:41.503327 systemd[1]: Finished initrd-setup-root.service.
Feb 9 19:14:41.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:41.507827 systemd[1]: Starting ignition-mount.service...
Feb 9 19:14:41.522690 kernel: audit: type=1130 audit(1707506081.503:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:41.519458 systemd-networkd[1112]: eth0: Gained IPv6LL
Feb 9 19:14:41.520372 systemd[1]: Starting sysroot-boot.service...
Feb 9 19:14:41.536888 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Feb 9 19:14:41.537073 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Feb 9 19:14:41.567676 ignition[1239]: INFO : Ignition 2.14.0 Feb 9 19:14:41.567676 ignition[1239]: INFO : Stage: mount Feb 9 19:14:41.570976 ignition[1239]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:14:41.570976 ignition[1239]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 9 19:14:41.580201 systemd[1]: Finished sysroot-boot.service. Feb 9 19:14:41.591140 kernel: audit: type=1130 audit(1707506081.580:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:41.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:41.597980 ignition[1239]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 9 19:14:41.600523 ignition[1239]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 9 19:14:41.604416 ignition[1239]: INFO : PUT result: OK Feb 9 19:14:41.609879 ignition[1239]: INFO : mount: mount passed Feb 9 19:14:41.611910 ignition[1239]: INFO : Ignition finished successfully Feb 9 19:14:41.614035 systemd[1]: Finished ignition-mount.service. Feb 9 19:14:41.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:41.618842 systemd[1]: Starting ignition-files.service... Feb 9 19:14:41.629649 kernel: audit: type=1130 audit(1707506081.616:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:14:41.634694 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 19:14:41.653788 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1248) Feb 9 19:14:41.660066 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Feb 9 19:14:41.660118 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 9 19:14:41.662253 kernel: BTRFS info (device nvme0n1p6): has skinny extents Feb 9 19:14:41.668775 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 9 19:14:41.673576 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 19:14:41.691566 ignition[1267]: INFO : Ignition 2.14.0 Feb 9 19:14:41.691566 ignition[1267]: INFO : Stage: files Feb 9 19:14:41.698055 ignition[1267]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:14:41.698055 ignition[1267]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 9 19:14:41.709846 ignition[1267]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 9 19:14:41.712545 ignition[1267]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 9 19:14:41.715737 ignition[1267]: INFO : PUT result: OK Feb 9 19:14:41.721190 ignition[1267]: DEBUG : files: compiled without relabeling support, skipping Feb 9 19:14:41.724784 ignition[1267]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 9 19:14:41.727597 ignition[1267]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 9 19:14:41.753251 ignition[1267]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 9 19:14:41.756021 ignition[1267]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 9 19:14:41.759240 unknown[1267]: wrote ssh authorized keys file for user: core Feb 9 19:14:41.762971 
ignition[1267]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 9 19:14:41.762971 ignition[1267]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 9 19:14:41.762971 ignition[1267]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 9 19:14:41.762971 ignition[1267]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 9 19:14:41.762971 ignition[1267]: INFO : GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Feb 9 19:14:41.821386 ignition[1267]: INFO : GET result: OK Feb 9 19:14:41.926927 ignition[1267]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 9 19:14:41.931093 ignition[1267]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz" Feb 9 19:14:41.931093 ignition[1267]: INFO : GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-arm64.tar.gz: attempt #1 Feb 9 19:14:42.395145 ignition[1267]: INFO : GET result: OK Feb 9 19:14:42.702791 ignition[1267]: DEBUG : file matches expected sum of: 4c7e4541123cbd6f1d6fec1f827395cd58d65716c0998de790f965485738b6d6257c0dc46fd7f66403166c299f6d5bf9ff30b6e1ff9afbb071f17005e834518c Feb 9 19:14:42.707765 ignition[1267]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz" Feb 9 19:14:42.707765 ignition[1267]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Feb 9 19:14:42.707765 ignition[1267]: INFO : GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz: 
attempt #1 Feb 9 19:14:43.134838 ignition[1267]: INFO : GET result: OK Feb 9 19:14:43.517604 ignition[1267]: DEBUG : file matches expected sum of: 6b5df61a53601926e4b5a9174828123d555f592165439f541bc117c68781f41c8bd30dccd52367e406d104df849bcbcfb72d9c4bafda4b045c59ce95d0ca0742 Feb 9 19:14:43.522269 ignition[1267]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Feb 9 19:14:43.522269 ignition[1267]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet" Feb 9 19:14:43.522269 ignition[1267]: INFO : GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubelet: attempt #1 Feb 9 19:14:43.640712 ignition[1267]: INFO : GET result: OK Feb 9 19:14:45.098603 ignition[1267]: DEBUG : file matches expected sum of: 0e4ee1f23bf768c49d09beb13a6b5fad6efc8e3e685e7c5610188763e3af55923fb46158b5e76973a0f9a055f9b30d525b467c53415f965536adc2f04d9cf18d Feb 9 19:14:45.103480 ignition[1267]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 9 19:14:45.103480 ignition[1267]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/eks/bootstrap.sh" Feb 9 19:14:45.103480 ignition[1267]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Feb 9 19:14:45.124242 ignition[1267]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem529352674" Feb 9 19:14:45.124242 ignition[1267]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem529352674": device or resource busy Feb 9 19:14:45.124242 ignition[1267]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem529352674", trying btrfs: device or resource busy Feb 9 19:14:45.124242 ignition[1267]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem529352674" Feb 9 19:14:45.136344 kernel: BTRFS info: devid 1 device path 
/dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1272) Feb 9 19:14:45.136386 ignition[1267]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem529352674" Feb 9 19:14:45.140554 ignition[1267]: INFO : op(3): [started] unmounting "/mnt/oem529352674" Feb 9 19:14:45.140554 ignition[1267]: INFO : op(3): [finished] unmounting "/mnt/oem529352674" Feb 9 19:14:45.140554 ignition[1267]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/eks/bootstrap.sh" Feb 9 19:14:45.140554 ignition[1267]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 9 19:14:45.140554 ignition[1267]: INFO : GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubeadm: attempt #1 Feb 9 19:14:45.160254 systemd[1]: mnt-oem529352674.mount: Deactivated successfully. Feb 9 19:14:45.205002 ignition[1267]: INFO : GET result: OK Feb 9 19:14:45.733795 ignition[1267]: DEBUG : file matches expected sum of: 46c9f489062bdb84574703f7339d140d7e42c9c71b367cd860071108a3c1d38fabda2ef69f9c0ff88f7c80e88d38f96ab2248d4c9a6c9c60b0a4c20fd640d0db Feb 9 19:14:45.738816 ignition[1267]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 9 19:14:45.738816 ignition[1267]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/kubectl" Feb 9 19:14:45.738816 ignition[1267]: INFO : GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubectl: attempt #1 Feb 9 19:14:45.803431 ignition[1267]: INFO : GET result: OK Feb 9 19:14:46.400475 ignition[1267]: DEBUG : file matches expected sum of: 3672fda0beebbbd636a2088f427463cbad32683ea4fbb1df61650552e63846b6a47db803ccb70c3db0a8f24746a23a5632bdc15a3fb78f4f7d833e7f86763c2a Feb 9 19:14:46.408119 ignition[1267]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/kubectl" Feb 9 19:14:46.408119 
ignition[1267]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 9 19:14:46.408119 ignition[1267]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 9 19:14:46.408119 ignition[1267]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/install.sh" Feb 9 19:14:46.408119 ignition[1267]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/install.sh" Feb 9 19:14:46.408119 ignition[1267]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 9 19:14:46.408119 ignition[1267]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 9 19:14:46.408119 ignition[1267]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 19:14:46.408119 ignition[1267]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 19:14:46.408119 ignition[1267]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 19:14:46.408119 ignition[1267]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 19:14:46.408119 ignition[1267]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 19:14:46.408119 ignition[1267]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 19:14:46.408119 ignition[1267]: INFO : files: createFilesystemsFiles: createFiles: op(11): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 9 
19:14:46.408119 ignition[1267]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Feb 9 19:14:46.469187 ignition[1267]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2949000542" Feb 9 19:14:46.469187 ignition[1267]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2949000542": device or resource busy Feb 9 19:14:46.469187 ignition[1267]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2949000542", trying btrfs: device or resource busy Feb 9 19:14:46.469187 ignition[1267]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2949000542" Feb 9 19:14:46.469187 ignition[1267]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2949000542" Feb 9 19:14:46.469187 ignition[1267]: INFO : op(6): [started] unmounting "/mnt/oem2949000542" Feb 9 19:14:46.469187 ignition[1267]: INFO : op(6): [finished] unmounting "/mnt/oem2949000542" Feb 9 19:14:46.469187 ignition[1267]: INFO : files: createFilesystemsFiles: createFiles: op(11): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 9 19:14:46.469187 ignition[1267]: INFO : files: createFilesystemsFiles: createFiles: op(12): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Feb 9 19:14:46.469187 ignition[1267]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Feb 9 19:14:46.442617 systemd[1]: mnt-oem2949000542.mount: Deactivated successfully. 
Feb 9 19:14:46.503910 ignition[1267]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem965998887" Feb 9 19:14:46.503910 ignition[1267]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem965998887": device or resource busy Feb 9 19:14:46.503910 ignition[1267]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem965998887", trying btrfs: device or resource busy Feb 9 19:14:46.503910 ignition[1267]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem965998887" Feb 9 19:14:46.518483 ignition[1267]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem965998887" Feb 9 19:14:46.518483 ignition[1267]: INFO : op(9): [started] unmounting "/mnt/oem965998887" Feb 9 19:14:46.521107 systemd[1]: mnt-oem965998887.mount: Deactivated successfully. Feb 9 19:14:46.532825 ignition[1267]: INFO : op(9): [finished] unmounting "/mnt/oem965998887" Feb 9 19:14:46.535056 ignition[1267]: INFO : files: createFilesystemsFiles: createFiles: op(12): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Feb 9 19:14:46.535056 ignition[1267]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Feb 9 19:14:46.542164 ignition[1267]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Feb 9 19:14:46.551723 ignition[1267]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3916976698" Feb 9 19:14:46.554590 ignition[1267]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3916976698": device or resource busy Feb 9 19:14:46.554590 ignition[1267]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3916976698", trying btrfs: device or resource busy Feb 9 19:14:46.554590 ignition[1267]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3916976698" Feb 9 19:14:46.566627 ignition[1267]: INFO : op(b): [finished] mounting 
"/dev/disk/by-label/OEM" at "/mnt/oem3916976698" Feb 9 19:14:46.566627 ignition[1267]: INFO : op(c): [started] unmounting "/mnt/oem3916976698" Feb 9 19:14:46.577431 ignition[1267]: INFO : op(c): [finished] unmounting "/mnt/oem3916976698" Feb 9 19:14:46.579559 ignition[1267]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Feb 9 19:14:46.579559 ignition[1267]: INFO : files: op(14): [started] processing unit "amazon-ssm-agent.service" Feb 9 19:14:46.579559 ignition[1267]: INFO : files: op(14): op(15): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Feb 9 19:14:46.579559 ignition[1267]: INFO : files: op(14): op(15): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Feb 9 19:14:46.579559 ignition[1267]: INFO : files: op(14): [finished] processing unit "amazon-ssm-agent.service" Feb 9 19:14:46.579559 ignition[1267]: INFO : files: op(16): [started] processing unit "nvidia.service" Feb 9 19:14:46.579559 ignition[1267]: INFO : files: op(16): [finished] processing unit "nvidia.service" Feb 9 19:14:46.579559 ignition[1267]: INFO : files: op(17): [started] processing unit "coreos-metadata-sshkeys@.service" Feb 9 19:14:46.579559 ignition[1267]: INFO : files: op(17): [finished] processing unit "coreos-metadata-sshkeys@.service" Feb 9 19:14:46.579559 ignition[1267]: INFO : files: op(18): [started] processing unit "containerd.service" Feb 9 19:14:46.579559 ignition[1267]: INFO : files: op(18): op(19): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 9 19:14:46.579559 ignition[1267]: INFO : files: op(18): op(19): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 9 19:14:46.579559 ignition[1267]: INFO : files: op(18): 
[finished] processing unit "containerd.service" Feb 9 19:14:46.579559 ignition[1267]: INFO : files: op(1a): [started] processing unit "prepare-cni-plugins.service" Feb 9 19:14:46.579559 ignition[1267]: INFO : files: op(1a): op(1b): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 19:14:46.579559 ignition[1267]: INFO : files: op(1a): op(1b): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 19:14:46.579559 ignition[1267]: INFO : files: op(1a): [finished] processing unit "prepare-cni-plugins.service" Feb 9 19:14:46.579559 ignition[1267]: INFO : files: op(1c): [started] processing unit "prepare-critools.service" Feb 9 19:14:46.579559 ignition[1267]: INFO : files: op(1c): op(1d): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 19:14:46.643074 ignition[1267]: INFO : files: op(1c): op(1d): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 19:14:46.643074 ignition[1267]: INFO : files: op(1c): [finished] processing unit "prepare-critools.service" Feb 9 19:14:46.643074 ignition[1267]: INFO : files: op(1e): [started] processing unit "prepare-helm.service" Feb 9 19:14:46.643074 ignition[1267]: INFO : files: op(1e): op(1f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 19:14:46.643074 ignition[1267]: INFO : files: op(1e): op(1f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 19:14:46.643074 ignition[1267]: INFO : files: op(1e): [finished] processing unit "prepare-helm.service" Feb 9 19:14:46.643074 ignition[1267]: INFO : files: op(20): [started] setting preset to enabled for "prepare-helm.service" Feb 9 19:14:46.643074 ignition[1267]: INFO : files: op(20): [finished] setting preset to 
enabled for "prepare-helm.service" Feb 9 19:14:46.643074 ignition[1267]: INFO : files: op(21): [started] setting preset to enabled for "amazon-ssm-agent.service" Feb 9 19:14:46.643074 ignition[1267]: INFO : files: op(21): [finished] setting preset to enabled for "amazon-ssm-agent.service" Feb 9 19:14:46.643074 ignition[1267]: INFO : files: op(22): [started] setting preset to enabled for "nvidia.service" Feb 9 19:14:46.643074 ignition[1267]: INFO : files: op(22): [finished] setting preset to enabled for "nvidia.service" Feb 9 19:14:46.643074 ignition[1267]: INFO : files: op(23): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 9 19:14:46.643074 ignition[1267]: INFO : files: op(23): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 9 19:14:46.643074 ignition[1267]: INFO : files: op(24): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 19:14:46.643074 ignition[1267]: INFO : files: op(24): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 19:14:46.643074 ignition[1267]: INFO : files: op(25): [started] setting preset to enabled for "prepare-critools.service" Feb 9 19:14:46.643074 ignition[1267]: INFO : files: op(25): [finished] setting preset to enabled for "prepare-critools.service" Feb 9 19:14:46.643074 ignition[1267]: INFO : files: createResultFile: createFiles: op(26): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 19:14:46.643074 ignition[1267]: INFO : files: createResultFile: createFiles: op(26): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 19:14:46.643074 ignition[1267]: INFO : files: files passed Feb 9 19:14:46.713293 kernel: audit: type=1130 audit(1707506086.696:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:14:46.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.695639 systemd[1]: Finished ignition-files.service. Feb 9 19:14:46.714894 ignition[1267]: INFO : Ignition finished successfully Feb 9 19:14:46.705393 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 9 19:14:46.721103 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 9 19:14:46.726096 systemd[1]: Starting ignition-quench.service... Feb 9 19:14:46.731925 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 19:14:46.733875 systemd[1]: Finished ignition-quench.service. Feb 9 19:14:46.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.737000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.753346 kernel: audit: type=1130 audit(1707506086.737:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.753715 kernel: audit: type=1131 audit(1707506086.737:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.756429 initrd-setup-root-after-ignition[1293]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 19:14:46.760733 systemd[1]: Finished initrd-setup-root-after-ignition.service. 
Feb 9 19:14:46.772230 kernel: audit: type=1130 audit(1707506086.761:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.772304 systemd[1]: Reached target ignition-complete.target. Feb 9 19:14:46.777104 systemd[1]: Starting initrd-parse-etc.service... Feb 9 19:14:46.804433 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 19:14:46.804836 systemd[1]: Finished initrd-parse-etc.service. Feb 9 19:14:46.824143 kernel: audit: type=1130 audit(1707506086.807:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.824796 kernel: audit: type=1131 audit(1707506086.807:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.807000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.808404 systemd[1]: Reached target initrd-fs.target. Feb 9 19:14:46.825419 systemd[1]: Reached target initrd.target. Feb 9 19:14:46.827059 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. 
Feb 9 19:14:46.829410 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 19:14:46.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.853687 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 19:14:46.866376 systemd[1]: Starting initrd-cleanup.service... Feb 9 19:14:46.877832 kernel: audit: type=1130 audit(1707506086.854:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.887923 systemd[1]: Stopped target nss-lookup.target. Feb 9 19:14:46.962080 kernel: audit: type=1131 audit(1707506086.886:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.962119 kernel: audit: type=1131 audit(1707506086.892:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.962145 kernel: audit: type=1131 audit(1707506086.892:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.892000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:14:46.892000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.892000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.940000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.940000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.888992 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 19:14:46.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.966000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.889296 systemd[1]: Stopped target timers.target. 
Feb 9 19:14:46.968925 ignition[1306]: INFO : Ignition 2.14.0 Feb 9 19:14:46.968925 ignition[1306]: INFO : Stage: umount Feb 9 19:14:46.968925 ignition[1306]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:14:46.968925 ignition[1306]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 9 19:14:46.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.997000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:14:46.997000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:47.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.889571 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 19:14:47.023000 audit: BPF prog-id=6 op=UNLOAD Feb 9 19:14:47.024625 ignition[1306]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 9 19:14:47.024625 ignition[1306]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 9 19:14:47.024625 ignition[1306]: INFO : PUT result: OK Feb 9 19:14:47.024625 ignition[1306]: INFO : umount: umount passed Feb 9 19:14:47.024625 ignition[1306]: INFO : Ignition finished successfully Feb 9 19:14:47.036000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.890450 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 19:14:47.045000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.891618 systemd[1]: Stopped target initrd.target. Feb 9 19:14:46.892712 systemd[1]: Stopped target basic.target. Feb 9 19:14:47.048000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:14:47.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.893260 systemd[1]: Stopped target ignition-complete.target. Feb 9 19:14:46.893594 systemd[1]: Stopped target ignition-diskful.target. Feb 9 19:14:46.893940 systemd[1]: Stopped target initrd-root-device.target. Feb 9 19:14:46.894228 systemd[1]: Stopped target remote-fs.target. Feb 9 19:14:47.064000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:47.069000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.894548 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 19:14:46.894908 systemd[1]: Stopped target sysinit.target. Feb 9 19:14:46.895221 systemd[1]: Stopped target local-fs.target. Feb 9 19:14:47.074000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:47.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:47.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:47.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 9 19:14:46.895530 systemd[1]: Stopped target local-fs-pre.target. Feb 9 19:14:46.895879 systemd[1]: Stopped target swap.target. Feb 9 19:14:46.896127 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 19:14:47.103000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.896327 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 19:14:46.896968 systemd[1]: Stopped target cryptsetup.target. Feb 9 19:14:46.897415 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 19:14:46.897603 systemd[1]: Stopped dracut-initqueue.service. Feb 9 19:14:46.897918 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 19:14:46.898119 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 19:14:46.898496 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 19:14:46.898679 systemd[1]: Stopped ignition-files.service. Feb 9 19:14:46.900872 systemd[1]: Stopping ignition-mount.service... Feb 9 19:14:46.935095 systemd[1]: Stopping sysroot-boot.service... Feb 9 19:14:46.941933 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 19:14:46.942233 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 19:14:46.942643 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 19:14:46.942923 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 19:14:46.962464 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 19:14:46.962996 systemd[1]: Finished initrd-cleanup.service. Feb 9 19:14:46.987658 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 19:14:46.988352 systemd[1]: Stopped sysroot-boot.service. Feb 9 19:14:46.993148 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 19:14:46.993327 systemd[1]: Stopped ignition-mount.service. 
Feb 9 19:14:46.993581 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 19:14:46.993664 systemd[1]: Stopped ignition-disks.service. Feb 9 19:14:46.993842 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 19:14:46.993915 systemd[1]: Stopped ignition-kargs.service. Feb 9 19:14:46.994131 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 9 19:14:46.994202 systemd[1]: Stopped ignition-fetch.service. Feb 9 19:14:46.994460 systemd[1]: Stopped target network.target. Feb 9 19:14:46.994734 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 19:14:46.994824 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 19:14:46.995119 systemd[1]: Stopped target paths.target. Feb 9 19:14:46.995420 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 19:14:46.998827 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 19:14:46.999026 systemd[1]: Stopped target slices.target. Feb 9 19:14:46.999354 systemd[1]: Stopped target sockets.target. Feb 9 19:14:46.999726 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 19:14:46.999799 systemd[1]: Closed iscsid.socket. Feb 9 19:14:47.000068 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 19:14:47.000123 systemd[1]: Closed iscsiuio.socket. Feb 9 19:14:47.000366 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 19:14:47.000445 systemd[1]: Stopped ignition-setup.service. Feb 9 19:14:47.000726 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 19:14:47.000820 systemd[1]: Stopped initrd-setup-root.service. Feb 9 19:14:47.001223 systemd[1]: Stopping systemd-networkd.service... Feb 9 19:14:47.001775 systemd[1]: Stopping systemd-resolved.service... Feb 9 19:14:47.015879 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 19:14:47.016078 systemd[1]: Stopped systemd-resolved.service. 
Feb 9 19:14:47.031957 systemd-networkd[1112]: eth0: DHCPv6 lease lost Feb 9 19:14:47.124000 audit: BPF prog-id=9 op=UNLOAD Feb 9 19:14:47.033727 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 19:14:47.033980 systemd[1]: Stopped systemd-networkd.service. Feb 9 19:14:47.037936 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 19:14:47.038020 systemd[1]: Closed systemd-networkd.socket. Feb 9 19:14:47.042276 systemd[1]: Stopping network-cleanup.service... Feb 9 19:14:47.044865 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 19:14:47.044997 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 19:14:47.046942 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 19:14:47.047032 systemd[1]: Stopped systemd-sysctl.service. Feb 9 19:14:47.050352 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 19:14:47.050439 systemd[1]: Stopped systemd-modules-load.service. Feb 9 19:14:47.053279 systemd[1]: Stopping systemd-udevd.service... Feb 9 19:14:47.065034 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 19:14:47.065229 systemd[1]: Stopped network-cleanup.service. Feb 9 19:14:47.068857 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 19:14:47.069117 systemd[1]: Stopped systemd-udevd.service. Feb 9 19:14:47.071628 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 19:14:47.071709 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 19:14:47.074695 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 19:14:47.074787 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 19:14:47.077313 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 19:14:47.077400 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 19:14:47.079025 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 19:14:47.079104 systemd[1]: Stopped dracut-cmdline.service. 
Feb 9 19:14:47.081144 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 19:14:47.081224 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 19:14:47.082799 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 19:14:47.083429 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 9 19:14:47.083529 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 9 19:14:47.087257 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 19:14:47.087364 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 19:14:47.108624 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 19:14:47.108727 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 19:14:47.220000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:47.223284 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 19:14:47.225515 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 19:14:47.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:47.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:47.229157 systemd[1]: Reached target initrd-switch-root.target. Feb 9 19:14:47.233973 systemd[1]: Starting initrd-switch-root.service... Feb 9 19:14:47.252000 audit: BPF prog-id=8 op=UNLOAD Feb 9 19:14:47.252000 audit: BPF prog-id=7 op=UNLOAD Feb 9 19:14:47.254117 systemd[1]: Switching root. 
Feb 9 19:14:47.255000 audit: BPF prog-id=5 op=UNLOAD Feb 9 19:14:47.255000 audit: BPF prog-id=4 op=UNLOAD Feb 9 19:14:47.255000 audit: BPF prog-id=3 op=UNLOAD Feb 9 19:14:47.289881 iscsid[1117]: iscsid shutting down. Feb 9 19:14:47.291431 systemd-journald[308]: Received SIGTERM from PID 1 (n/a). Feb 9 19:14:47.291542 systemd-journald[308]: Journal stopped Feb 9 19:14:52.601575 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 19:14:52.601681 kernel: SELinux: Class anon_inode not defined in policy. Feb 9 19:14:52.601715 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 19:14:52.601769 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 19:14:52.601815 kernel: SELinux: policy capability open_perms=1 Feb 9 19:14:52.601847 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 19:14:52.601879 kernel: SELinux: policy capability always_check_network=0 Feb 9 19:14:52.601911 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 19:14:52.601949 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 19:14:52.601987 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 19:14:52.602017 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 19:14:52.602051 systemd[1]: Successfully loaded SELinux policy in 118.210ms. Feb 9 19:14:52.602100 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 27.641ms. Feb 9 19:14:52.602134 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 19:14:52.602165 systemd[1]: Detected virtualization amazon. Feb 9 19:14:52.602194 systemd[1]: Detected architecture arm64. Feb 9 19:14:52.602226 systemd[1]: Detected first boot. 
Feb 9 19:14:52.602260 systemd[1]: Initializing machine ID from VM UUID. Feb 9 19:14:52.602293 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 19:14:52.602325 systemd[1]: Populated /etc with preset unit settings. Feb 9 19:14:52.602356 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:14:52.602391 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:14:52.602431 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:14:52.602466 systemd[1]: Queued start job for default target multi-user.target. Feb 9 19:14:52.602503 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 19:14:52.602533 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 19:14:52.602565 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Feb 9 19:14:52.602597 systemd[1]: Created slice system-getty.slice. Feb 9 19:14:52.602626 systemd[1]: Created slice system-modprobe.slice. Feb 9 19:14:52.602659 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 19:14:52.602691 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 19:14:52.602723 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 19:14:52.602813 systemd[1]: Created slice user.slice. Feb 9 19:14:52.602855 systemd[1]: Started systemd-ask-password-console.path. Feb 9 19:14:52.602887 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 19:14:52.602919 systemd[1]: Set up automount boot.automount. Feb 9 19:14:52.602949 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. 
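The two `locksmithd.service` warnings above (`CPUShares=` and `MemoryLimit=`) have a standard remedy: a drop-in that replaces the cgroup-v1 directives with their cgroup-v2 equivalents. A minimal sketch — the weight and limit values are illustrative, and the drop-in is written to a temporary directory rather than `/etc/systemd/system/locksmithd.service.d/` so the example is self-contained:

```shell
# Hypothetical drop-in migrating the deprecated v1 directives flagged in
# the log to their cgroup-v2 names; written under mktemp so the sketch
# runs anywhere without touching /etc.
dropin_dir="$(mktemp -d)"
cat > "$dropin_dir/10-cgroup-v2.conf" <<'EOF'
[Service]
# CPUShares= (v1) becomes CPUWeight= (v2); the v2 default weight is 100.
CPUWeight=100
# MemoryLimit= (v1) becomes MemoryMax= (v2); 512M is a placeholder value.
MemoryMax=512M
EOF
cat "$dropin_dir/10-cgroup-v2.conf"
```

On a live system the file would go in `/etc/systemd/system/locksmithd.service.d/`, followed by `systemctl daemon-reload`.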
Feb 9 19:14:52.602982 systemd[1]: Reached target integritysetup.target. Feb 9 19:14:52.603015 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 19:14:52.603048 systemd[1]: Reached target remote-fs.target. Feb 9 19:14:52.603080 systemd[1]: Reached target slices.target. Feb 9 19:14:52.603115 systemd[1]: Reached target swap.target. Feb 9 19:14:52.603149 systemd[1]: Reached target torcx.target. Feb 9 19:14:52.603181 systemd[1]: Reached target veritysetup.target. Feb 9 19:14:52.603226 systemd[1]: Listening on systemd-coredump.socket. Feb 9 19:14:52.603263 systemd[1]: Listening on systemd-initctl.socket. Feb 9 19:14:52.603293 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 19:14:52.603322 kernel: kauditd_printk_skb: 47 callbacks suppressed Feb 9 19:14:52.603354 kernel: audit: type=1400 audit(1707506092.247:87): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 19:14:52.603385 kernel: audit: type=1335 audit(1707506092.247:88): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 9 19:14:52.603418 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 19:14:52.603450 systemd[1]: Listening on systemd-journald.socket. Feb 9 19:14:52.603481 systemd[1]: Listening on systemd-networkd.socket. Feb 9 19:14:52.603510 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 19:14:52.603541 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 19:14:52.603571 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 19:14:52.603602 systemd[1]: Mounting dev-hugepages.mount... Feb 9 19:14:52.603634 systemd[1]: Mounting dev-mqueue.mount... Feb 9 19:14:52.603664 systemd[1]: Mounting media.mount... Feb 9 19:14:52.603699 systemd[1]: Mounting sys-kernel-debug.mount... 
Feb 9 19:14:52.603729 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 19:14:52.603775 systemd[1]: Mounting tmp.mount... Feb 9 19:14:52.603809 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 19:14:52.603853 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 19:14:52.603883 systemd[1]: Starting kmod-static-nodes.service... Feb 9 19:14:52.603912 systemd[1]: Starting modprobe@configfs.service... Feb 9 19:14:52.603942 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 19:14:52.603971 systemd[1]: Starting modprobe@drm.service... Feb 9 19:14:52.604007 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 19:14:52.604040 systemd[1]: Starting modprobe@fuse.service... Feb 9 19:14:52.604069 systemd[1]: Starting modprobe@loop.service... Feb 9 19:14:52.604102 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 19:14:52.604132 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 9 19:14:52.604164 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Feb 9 19:14:52.604193 systemd[1]: Starting systemd-journald.service... Feb 9 19:14:52.604224 systemd[1]: Starting systemd-modules-load.service... Feb 9 19:14:52.604258 systemd[1]: Starting systemd-network-generator.service... Feb 9 19:14:52.604293 systemd[1]: Starting systemd-remount-fs.service... Feb 9 19:14:52.604323 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 19:14:52.604353 systemd[1]: Mounted dev-hugepages.mount. Feb 9 19:14:52.604382 systemd[1]: Mounted dev-mqueue.mount. Feb 9 19:14:52.604411 systemd[1]: Mounted media.mount. Feb 9 19:14:52.604440 kernel: loop: module loaded Feb 9 19:14:52.604470 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 19:14:52.604502 systemd[1]: Mounted sys-kernel-tracing.mount. 
Feb 9 19:14:52.604530 kernel: fuse: init (API version 7.34) Feb 9 19:14:52.604565 systemd[1]: Mounted tmp.mount. Feb 9 19:14:52.604595 systemd[1]: Finished kmod-static-nodes.service. Feb 9 19:14:52.604625 kernel: audit: type=1130 audit(1707506092.526:89): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.604655 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 19:14:52.604686 systemd[1]: Finished modprobe@configfs.service. Feb 9 19:14:52.604718 kernel: audit: type=1130 audit(1707506092.556:90): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.604762 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 19:14:52.604799 kernel: audit: type=1131 audit(1707506092.556:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.604833 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 19:14:52.604865 kernel: audit: type=1130 audit(1707506092.580:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.604894 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 19:14:52.604927 systemd[1]: Finished modprobe@drm.service. Feb 9 19:14:52.604967 kernel: audit: type=1131 audit(1707506092.580:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:14:52.605001 systemd-journald[1459]: Journal started Feb 9 19:14:52.605095 systemd-journald[1459]: Runtime Journal (/run/log/journal/ec299702e12c4d64a11ed3b2a45329ad) is 8.0M, max 75.4M, 67.4M free. Feb 9 19:14:52.247000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 19:14:52.247000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 9 19:14:52.526000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.556000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.580000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.618223 systemd[1]: Started systemd-journald.service. Feb 9 19:14:52.614146 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Feb 9 19:14:52.655976 kernel: audit: type=1305 audit(1707506092.598:94): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 19:14:52.656027 kernel: audit: type=1300 audit(1707506092.598:94): arch=c00000b7 syscall=211 success=yes exit=60 a0=5 a1=ffffda20cc70 a2=4000 a3=1 items=0 ppid=1 pid=1459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:14:52.656068 kernel: audit: type=1327 audit(1707506092.598:94): proctitle="/usr/lib/systemd/systemd-journald" Feb 9 19:14:52.598000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 19:14:52.598000 audit[1459]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=5 a1=ffffda20cc70 a2=4000 a3=1 items=0 ppid=1 pid=1459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:14:52.598000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 19:14:52.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.606000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:14:52.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.617000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.624000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:14:52.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.614515 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 19:14:52.617010 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 19:14:52.617364 systemd[1]: Finished modprobe@fuse.service. Feb 9 19:14:52.619911 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 19:14:52.624975 systemd[1]: Finished modprobe@loop.service. Feb 9 19:14:52.653531 systemd[1]: Finished systemd-modules-load.service. Feb 9 19:14:52.657465 systemd[1]: Finished systemd-network-generator.service. Feb 9 19:14:52.660388 systemd[1]: Finished systemd-remount-fs.service. Feb 9 19:14:52.662705 systemd[1]: Reached target network-pre.target. Feb 9 19:14:52.667040 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 19:14:52.674340 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 19:14:52.675935 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 19:14:52.681228 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 19:14:52.685319 systemd[1]: Starting systemd-journal-flush.service... Feb 9 19:14:52.687466 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 19:14:52.694954 systemd[1]: Starting systemd-random-seed.service... Feb 9 19:14:52.696774 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 19:14:52.699424 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:14:52.704584 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 19:14:52.708272 systemd[1]: Mounted sys-kernel-config.mount. 
Feb 9 19:14:52.743409 systemd-journald[1459]: Time spent on flushing to /var/log/journal/ec299702e12c4d64a11ed3b2a45329ad is 92.341ms for 1104 entries. Feb 9 19:14:52.743409 systemd-journald[1459]: System Journal (/var/log/journal/ec299702e12c4d64a11ed3b2a45329ad) is 8.0M, max 195.6M, 187.6M free. Feb 9 19:14:52.855020 systemd-journald[1459]: Received client request to flush runtime journal. Feb 9 19:14:52.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.755675 systemd[1]: Finished systemd-random-seed.service. Feb 9 19:14:52.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.772535 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 19:14:52.778613 systemd[1]: Finished systemd-sysctl.service. Feb 9 19:14:52.780554 systemd[1]: Reached target first-boot-complete.target. Feb 9 19:14:52.785538 systemd[1]: Starting systemd-sysusers.service... Feb 9 19:14:52.858009 systemd[1]: Finished systemd-journal-flush.service. Feb 9 19:14:52.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
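The journald lines above show an 8.0M runtime journal in `/run` being flushed into a persistent journal capped at 195.6M under `/var/log/journal`. Both caps can be pinned explicitly in `journald.conf`; a sketch with illustrative values, written to a temp file rather than the real `/etc/systemd/journald.conf`:

```shell
# Illustrative journald.conf fragment: SystemMaxUse= bounds the persistent
# journal in /var/log/journal, RuntimeMaxUse= bounds the volatile /run
# journal that systemd-journal-flush.service migrates at boot.
conf="$(mktemp -d)/journald.conf"
cat > "$conf" <<'EOF'
[Journal]
SystemMaxUse=200M
RuntimeMaxUse=50M
EOF
cat "$conf"
```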
addr=? terminal=? res=success' Feb 9 19:14:52.866575 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 19:14:52.871254 systemd[1]: Starting systemd-udev-settle.service... Feb 9 19:14:52.889633 udevadm[1520]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 9 19:14:52.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.890491 systemd[1]: Finished systemd-sysusers.service. Feb 9 19:14:52.895435 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 19:14:52.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.961279 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 19:14:53.662273 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 19:14:53.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:53.666516 systemd[1]: Starting systemd-udevd.service... Feb 9 19:14:53.706352 systemd-udevd[1525]: Using default interface naming scheme 'v252'. Feb 9 19:14:53.759271 systemd[1]: Started systemd-udevd.service. Feb 9 19:14:53.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:53.764234 systemd[1]: Starting systemd-networkd.service... Feb 9 19:14:53.777434 systemd[1]: Starting systemd-userdbd.service... 
Feb 9 19:14:53.848635 (udev-worker)[1532]: Network interface NamePolicy= disabled on kernel command line. Feb 9 19:14:53.860795 systemd[1]: Found device dev-ttyS0.device. Feb 9 19:14:53.892876 systemd[1]: Started systemd-userdbd.service. Feb 9 19:14:53.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:54.053887 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1548) Feb 9 19:14:54.063821 systemd-networkd[1533]: lo: Link UP Feb 9 19:14:54.063843 systemd-networkd[1533]: lo: Gained carrier Feb 9 19:14:54.064776 systemd-networkd[1533]: Enumeration completed Feb 9 19:14:54.064973 systemd[1]: Started systemd-networkd.service. Feb 9 19:14:54.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:54.069472 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 19:14:54.071478 systemd-networkd[1533]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 19:14:54.076797 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:14:54.077099 systemd-networkd[1533]: eth0: Link UP Feb 9 19:14:54.077401 systemd-networkd[1533]: eth0: Gained carrier Feb 9 19:14:54.099186 systemd-networkd[1533]: eth0: DHCPv4 address 172.31.18.155/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 9 19:14:54.262840 systemd[1]: dev-disk-by\x2dlabel-OEM.device was skipped because of an unmet condition check (ConditionPathExists=!/usr/.noupdate). Feb 9 19:14:54.263669 systemd[1]: Finished systemd-udev-settle.service. 
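Above, eth0 is matched by the shipped `/usr/lib/systemd/network/zz-default.network` and acquires 172.31.18.155/20 over DHCP. A minimal `.network` unit of the same general shape — the contents are illustrative, not the actual Flatcar file, and it is written to a temp directory rather than `/etc/systemd/network/`:

```shell
# Hypothetical networkd unit equivalent to "DHCP on every ethernet NIC";
# the 'EOF' quoting keeps the eth* glob literal.
netdir="$(mktemp -d)"
cat > "$netdir/50-dhcp.network" <<'EOF'
[Match]
Name=eth*

[Network]
DHCP=yes
EOF
cat "$netdir/50-dhcp.network"
```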
Feb 9 19:14:54.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:54.268155 systemd[1]: Starting lvm2-activation-early.service... Feb 9 19:14:54.320668 lvm[1646]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 19:14:54.358472 systemd[1]: Finished lvm2-activation-early.service. Feb 9 19:14:54.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:54.360983 systemd[1]: Reached target cryptsetup.target. Feb 9 19:14:54.365524 systemd[1]: Starting lvm2-activation.service... Feb 9 19:14:54.375590 lvm[1648]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 19:14:54.412521 systemd[1]: Finished lvm2-activation.service. Feb 9 19:14:54.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:54.415439 systemd[1]: Reached target local-fs-pre.target. Feb 9 19:14:54.417945 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 19:14:54.418177 systemd[1]: Reached target local-fs.target. Feb 9 19:14:54.421103 systemd[1]: Reached target machines.target. Feb 9 19:14:54.425312 systemd[1]: Starting ldconfig.service... Feb 9 19:14:54.428367 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Feb 9 19:14:54.428645 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:14:54.431518 systemd[1]: Starting systemd-boot-update.service... Feb 9 19:14:54.436607 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 19:14:54.441805 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 19:14:54.444040 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 19:14:54.444618 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 19:14:54.450412 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 19:14:54.467454 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1651 (bootctl) Feb 9 19:14:54.469743 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 19:14:54.489507 systemd-tmpfiles[1654]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 19:14:54.491732 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 19:14:54.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:54.495945 systemd-tmpfiles[1654]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 19:14:54.500370 systemd-tmpfiles[1654]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 19:14:54.559809 systemd-fsck[1660]: fsck.fat 4.2 (2021-01-31) Feb 9 19:14:54.559809 systemd-fsck[1660]: /dev/nvme0n1p1: 236 files, 113719/258078 clusters Feb 9 19:14:54.562723 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. 
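The three "Duplicate line for path" messages above are benign: two tmpfiles.d fragments declare the same path, and systemd-tmpfiles keeps the first entry it parses. A sketch of what such a collision looks like on disk — both files and their modes are hypothetical, echoing the `/run/lock` path from the `legacy.conf` warning:

```shell
# Two hypothetical tmpfiles.d fragments claiming the same directory;
# when both are read, only the first entry for /run/lock takes effect
# and the second produces a "Duplicate line for path" warning.
td="$(mktemp -d)"
printf 'd /run/lock 0755 root root -\n' > "$td/a.conf"
printf 'd /run/lock 1777 root root -\n' > "$td/b.conf"
cat "$td"/*.conf
```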
Feb 9 19:14:54.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:54.567647 systemd[1]: Mounting boot.mount... Feb 9 19:14:54.618226 systemd[1]: Mounted boot.mount. Feb 9 19:14:54.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:54.640178 systemd[1]: Finished systemd-boot-update.service. Feb 9 19:14:54.847140 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 19:14:54.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:54.851642 systemd[1]: Starting audit-rules.service... Feb 9 19:14:54.856108 systemd[1]: Starting clean-ca-certificates.service... Feb 9 19:14:54.866662 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 19:14:54.874462 systemd[1]: Starting systemd-resolved.service... Feb 9 19:14:54.882236 systemd[1]: Starting systemd-timesyncd.service... Feb 9 19:14:54.889702 systemd[1]: Starting systemd-update-utmp.service... Feb 9 19:14:54.903371 systemd[1]: Finished clean-ca-certificates.service. Feb 9 19:14:54.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:54.906606 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Feb 9 19:14:54.924000 audit[1690]: SYSTEM_BOOT pid=1690 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 19:14:54.947580 systemd[1]: Finished systemd-update-utmp.service. Feb 9 19:14:54.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:55.007852 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 19:14:55.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:55.031000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 19:14:55.031000 audit[1701]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc3981a90 a2=420 a3=0 items=0 ppid=1678 pid=1701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:14:55.031000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 19:14:55.033317 augenrules[1701]: No rules Feb 9 19:14:55.034447 systemd[1]: Finished audit-rules.service. Feb 9 19:14:55.091945 systemd-resolved[1686]: Positive Trust Anchors: Feb 9 19:14:55.092701 systemd-resolved[1686]: . 
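The `PROCTITLE` field in the audit record above is hex-encoded because the kernel separates argv entries with NUL bytes. Decoding it recovers the auditctl invocation that loaded the (empty) rule set reported by `augenrules[1701]: No rules`. A sketch of the decode in plain shell (assumes bash's `printf '%b'`, which understands `\xHH` escapes):

```shell
# Decode the hex PROCTITLE from the audit record: each hex pair is one
# byte, and NUL argv separators are mapped back to spaces.
hex=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
escapes=$(printf '%s' "$hex" | sed 's/../\\x&/g')   # "2F" -> "\x2F", ...
cmdline=$(printf '%b' "$escapes" | tr '\0' ' ')
echo "$cmdline"   # -> /sbin/auditctl -R /etc/audit/audit.rules
```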
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 19:14:55.092959 systemd-resolved[1686]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 19:14:55.113561 systemd-resolved[1686]: Defaulting to hostname 'linux'. Feb 9 19:14:55.116958 systemd[1]: Started systemd-resolved.service. Feb 9 19:14:55.118884 systemd[1]: Reached target network.target. Feb 9 19:14:55.120506 systemd[1]: Reached target nss-lookup.target. Feb 9 19:14:55.128426 systemd[1]: Started systemd-timesyncd.service. Feb 9 19:14:55.130342 systemd[1]: Reached target time-set.target. Feb 9 19:14:55.165448 systemd-timesyncd[1687]: Contacted time server 50.18.44.198:123 (0.flatcar.pool.ntp.org). Feb 9 19:14:55.165743 systemd-timesyncd[1687]: Initial clock synchronization to Fri 2024-02-09 19:14:55.537010 UTC. Feb 9 19:14:55.187299 ldconfig[1650]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 19:14:55.211954 systemd[1]: Finished ldconfig.service. Feb 9 19:14:55.216680 systemd[1]: Starting systemd-update-done.service... Feb 9 19:14:55.232106 systemd[1]: Finished systemd-update-done.service. Feb 9 19:14:55.234113 systemd[1]: Reached target sysinit.target. Feb 9 19:14:55.235987 systemd[1]: Started motdgen.path. Feb 9 19:14:55.238037 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 19:14:55.242278 systemd[1]: Started logrotate.timer. Feb 9 19:14:55.244051 systemd[1]: Started mdadm.timer. Feb 9 19:14:55.245470 systemd[1]: Started systemd-tmpfiles-clean.timer. 
Feb 9 19:14:55.247222 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 19:14:55.247284 systemd[1]: Reached target paths.target. Feb 9 19:14:55.248841 systemd[1]: Reached target timers.target. Feb 9 19:14:55.251603 systemd[1]: Listening on dbus.socket. Feb 9 19:14:55.255490 systemd[1]: Starting docker.socket... Feb 9 19:14:55.259442 systemd[1]: Listening on sshd.socket. Feb 9 19:14:55.261547 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:14:55.262890 systemd[1]: Listening on docker.socket. Feb 9 19:14:55.264763 systemd[1]: Reached target sockets.target. Feb 9 19:14:55.266584 systemd[1]: Reached target basic.target. Feb 9 19:14:55.268655 systemd[1]: System is tainted: cgroupsv1 Feb 9 19:14:55.268938 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 19:14:55.269119 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 19:14:55.271815 systemd[1]: Starting containerd.service... Feb 9 19:14:55.275921 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Feb 9 19:14:55.281187 systemd[1]: Starting dbus.service... Feb 9 19:14:55.285697 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 19:14:55.297096 systemd[1]: Starting extend-filesystems.service... Feb 9 19:14:55.298790 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 19:14:55.304377 systemd[1]: Starting motdgen.service... Feb 9 19:14:55.309673 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 19:14:55.311470 jq[1718]: false Feb 9 19:14:55.317956 systemd[1]: Starting prepare-critools.service... 
Feb 9 19:14:55.331030 systemd[1]: Starting prepare-helm.service... Feb 9 19:14:55.335416 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 19:14:55.341477 systemd[1]: Starting sshd-keygen.service... Feb 9 19:14:55.355702 systemd[1]: Starting systemd-logind.service... Feb 9 19:14:55.357268 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:14:55.357401 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 9 19:14:55.360376 systemd[1]: Starting update-engine.service... Feb 9 19:14:55.365312 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 19:14:55.452016 jq[1736]: true Feb 9 19:14:55.372854 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 19:14:55.373418 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 19:14:55.453676 tar[1738]: ./ Feb 9 19:14:55.453676 tar[1738]: ./macvlan Feb 9 19:14:55.413811 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 19:14:55.414335 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 9 19:14:55.476047 tar[1739]: crictl Feb 9 19:14:55.479916 jq[1744]: true Feb 9 19:14:55.463450 systemd[1]: Started dbus.service. Feb 9 19:14:55.463098 dbus-daemon[1717]: [system] SELinux support is enabled Feb 9 19:14:55.480631 tar[1747]: linux-arm64/helm Feb 9 19:14:55.468386 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 19:14:55.468432 systemd[1]: Reached target system-config.target. Feb 9 19:14:55.470358 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Feb 9 19:14:55.470396 systemd[1]: Reached target user-config.target. Feb 9 19:14:55.517134 dbus-daemon[1717]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1533 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 9 19:14:55.520170 extend-filesystems[1719]: Found nvme0n1 Feb 9 19:14:55.527414 extend-filesystems[1719]: Found nvme0n1p1 Feb 9 19:14:55.530540 dbus-daemon[1717]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 9 19:14:55.530976 extend-filesystems[1719]: Found nvme0n1p2 Feb 9 19:14:55.534113 extend-filesystems[1719]: Found nvme0n1p3 Feb 9 19:14:55.538923 extend-filesystems[1719]: Found usr Feb 9 19:14:55.541953 extend-filesystems[1719]: Found nvme0n1p4 Feb 9 19:14:55.544299 extend-filesystems[1719]: Found nvme0n1p6 Feb 9 19:14:55.546402 extend-filesystems[1719]: Found nvme0n1p7 Feb 9 19:14:55.549339 extend-filesystems[1719]: Found nvme0n1p9 Feb 9 19:14:55.553995 extend-filesystems[1719]: Checking size of /dev/nvme0n1p9 Feb 9 19:14:55.563455 systemd[1]: Starting systemd-hostnamed.service... Feb 9 19:14:55.576301 systemd[1]: Created slice system-sshd.slice. Feb 9 19:14:55.595947 bash[1778]: Updated "/home/core/.ssh/authorized_keys" Feb 9 19:14:55.597288 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 19:14:55.653082 extend-filesystems[1719]: Resized partition /dev/nvme0n1p9 Feb 9 19:14:55.666580 extend-filesystems[1787]: resize2fs 1.46.5 (30-Dec-2021) Feb 9 19:14:55.677694 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 19:14:55.678326 systemd[1]: Finished motdgen.service. Feb 9 19:14:55.752788 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 9 19:14:55.759328 update_engine[1734]: I0209 19:14:55.758883 1734 main.cc:92] Flatcar Update Engine starting Feb 9 19:14:55.771819 systemd[1]: Started update-engine.service. 
Feb 9 19:14:55.773775 update_engine[1734]: I0209 19:14:55.773720 1734 update_check_scheduler.cc:74] Next update check in 7m56s Feb 9 19:14:55.777468 systemd[1]: Started locksmithd.service. Feb 9 19:14:55.779414 systemd-networkd[1533]: eth0: Gained IPv6LL Feb 9 19:14:55.783108 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 19:14:55.785367 systemd[1]: Reached target network-online.target. Feb 9 19:14:55.826026 systemd[1]: Started amazon-ssm-agent.service. Feb 9 19:14:55.830919 systemd[1]: Started nvidia.service. Feb 9 19:14:55.838671 dbus-daemon[1717]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 9 19:14:55.838947 systemd[1]: Started systemd-hostnamed.service. Feb 9 19:14:55.843535 dbus-daemon[1717]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1779 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 9 19:14:55.848361 systemd[1]: Starting polkit.service... Feb 9 19:14:55.883790 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 9 19:14:55.970712 extend-filesystems[1787]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 9 19:14:55.970712 extend-filesystems[1787]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 9 19:14:55.970712 extend-filesystems[1787]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Feb 9 19:14:55.992465 extend-filesystems[1719]: Resized filesystem in /dev/nvme0n1p9 Feb 9 19:14:55.992442 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 19:14:55.993303 systemd[1]: Finished extend-filesystems.service. Feb 9 19:14:56.016743 polkitd[1822]: Started polkitd version 121 Feb 9 19:14:56.068662 systemd-logind[1733]: Watching system buttons on /dev/input/event0 (Power Button) Feb 9 19:14:56.071561 systemd-logind[1733]: New seat seat0. Feb 9 19:14:56.075798 systemd[1]: Started systemd-logind.service. 
Feb 9 19:14:56.100906 polkitd[1822]: Loading rules from directory /etc/polkit-1/rules.d Feb 9 19:14:56.110402 polkitd[1822]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 9 19:14:56.150475 env[1741]: time="2024-02-09T19:14:56.132848207Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 19:14:56.151444 polkitd[1822]: Finished loading, compiling and executing 2 rules Feb 9 19:14:56.159723 systemd[1]: Started polkit.service. Feb 9 19:14:56.159464 dbus-daemon[1717]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 9 19:14:56.162708 polkitd[1822]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 9 19:14:56.222042 systemd-resolved[1686]: System hostname changed to 'ip-172-31-18-155'. Feb 9 19:14:56.222047 systemd-hostnamed[1779]: Hostname set to (transient) Feb 9 19:14:56.227556 tar[1738]: ./static Feb 9 19:14:56.257617 amazon-ssm-agent[1815]: 2024/02/09 19:14:56 Failed to load instance info from vault. RegistrationKey does not exist. Feb 9 19:14:56.262459 amazon-ssm-agent[1815]: Initializing new seelog logger Feb 9 19:14:56.265077 amazon-ssm-agent[1815]: New Seelog Logger Creation Complete Feb 9 19:14:56.265077 amazon-ssm-agent[1815]: 2024/02/09 19:14:56 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 9 19:14:56.265077 amazon-ssm-agent[1815]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 9 19:14:56.265077 amazon-ssm-agent[1815]: 2024/02/09 19:14:56 processing appconfig overrides Feb 9 19:14:56.361779 systemd[1]: nvidia.service: Deactivated successfully. Feb 9 19:14:56.379503 env[1741]: time="2024-02-09T19:14:56.379415064Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 19:14:56.379784 env[1741]: time="2024-02-09T19:14:56.379696406Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Feb 9 19:14:56.402930 tar[1738]: ./vlan Feb 9 19:14:56.423281 env[1741]: time="2024-02-09T19:14:56.423214385Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:14:56.423496 env[1741]: time="2024-02-09T19:14:56.423462512Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:14:56.424171 env[1741]: time="2024-02-09T19:14:56.424125770Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:14:56.424340 env[1741]: time="2024-02-09T19:14:56.424309400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 9 19:14:56.424481 env[1741]: time="2024-02-09T19:14:56.424448916Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 19:14:56.424602 env[1741]: time="2024-02-09T19:14:56.424572445Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 19:14:56.424931 env[1741]: time="2024-02-09T19:14:56.424898718Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:14:56.425565 env[1741]: time="2024-02-09T19:14:56.425524240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:14:56.426128 env[1741]: time="2024-02-09T19:14:56.426084336Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:14:56.426288 env[1741]: time="2024-02-09T19:14:56.426257130Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 19:14:56.426523 env[1741]: time="2024-02-09T19:14:56.426489924Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 19:14:56.428677 env[1741]: time="2024-02-09T19:14:56.428629182Z" level=info msg="metadata content store policy set" policy=shared Feb 9 19:14:56.429583 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 19:14:56.431479 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 19:14:56.445317 env[1741]: time="2024-02-09T19:14:56.445230575Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 19:14:56.445622 env[1741]: time="2024-02-09T19:14:56.445586911Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 19:14:56.445847 env[1741]: time="2024-02-09T19:14:56.445748277Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 19:14:56.446202 env[1741]: time="2024-02-09T19:14:56.446033499Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 19:14:56.446376 env[1741]: time="2024-02-09T19:14:56.446345745Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 19:14:56.446532 env[1741]: time="2024-02-09T19:14:56.446502853Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Feb 9 19:14:56.446696 env[1741]: time="2024-02-09T19:14:56.446666542Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 19:14:56.447560 env[1741]: time="2024-02-09T19:14:56.447495548Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 19:14:56.447808 env[1741]: time="2024-02-09T19:14:56.447742017Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 19:14:56.447970 env[1741]: time="2024-02-09T19:14:56.447939713Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 19:14:56.448120 env[1741]: time="2024-02-09T19:14:56.448091007Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 19:14:56.448274 env[1741]: time="2024-02-09T19:14:56.448244060Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 19:14:56.448687 env[1741]: time="2024-02-09T19:14:56.448656993Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 19:14:56.449140 env[1741]: time="2024-02-09T19:14:56.449083602Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 19:14:56.450308 env[1741]: time="2024-02-09T19:14:56.450240099Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 19:14:56.450853 env[1741]: time="2024-02-09T19:14:56.450765373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 19:14:56.451060 env[1741]: time="2024-02-09T19:14:56.451027200Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." 
type=io.containerd.internal.v1 Feb 9 19:14:56.451331 env[1741]: time="2024-02-09T19:14:56.451299751Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 19:14:56.451492 env[1741]: time="2024-02-09T19:14:56.451463164Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 19:14:56.451945 env[1741]: time="2024-02-09T19:14:56.451909877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 19:14:56.452104 env[1741]: time="2024-02-09T19:14:56.452074960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 19:14:56.452263 env[1741]: time="2024-02-09T19:14:56.452233601Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 19:14:56.452418 env[1741]: time="2024-02-09T19:14:56.452388010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 19:14:56.453040 env[1741]: time="2024-02-09T19:14:56.452994595Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 19:14:56.453211 env[1741]: time="2024-02-09T19:14:56.453180774Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 19:14:56.455932 env[1741]: time="2024-02-09T19:14:56.455870573Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 19:14:56.456655 env[1741]: time="2024-02-09T19:14:56.456617929Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 19:14:56.460241 env[1741]: time="2024-02-09T19:14:56.460167275Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Feb 9 19:14:56.460480 env[1741]: time="2024-02-09T19:14:56.460447536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 19:14:56.460633 env[1741]: time="2024-02-09T19:14:56.460602548Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 19:14:56.460844 env[1741]: time="2024-02-09T19:14:56.460806484Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 19:14:56.460999 env[1741]: time="2024-02-09T19:14:56.460969319Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 19:14:56.461151 env[1741]: time="2024-02-09T19:14:56.461119609Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 19:14:56.461352 env[1741]: time="2024-02-09T19:14:56.461302901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 9 19:14:56.463395 env[1741]: time="2024-02-09T19:14:56.463230777Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 19:14:56.464602 env[1741]: time="2024-02-09T19:14:56.463957928Z" level=info msg="Connect containerd service" Feb 9 19:14:56.464602 env[1741]: time="2024-02-09T19:14:56.464069289Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 19:14:56.466225 env[1741]: time="2024-02-09T19:14:56.466169519Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 19:14:56.471068 env[1741]: time="2024-02-09T19:14:56.470977693Z" level=info msg="Start subscribing containerd event" Feb 9 19:14:56.473960 env[1741]: time="2024-02-09T19:14:56.473892852Z" level=info msg="Start recovering state" Feb 9 19:14:56.474322 env[1741]: time="2024-02-09T19:14:56.474282390Z" level=info msg="Start event monitor" Feb 9 19:14:56.474472 env[1741]: time="2024-02-09T19:14:56.474443994Z" level=info msg="Start snapshots syncer" Feb 9 19:14:56.475113 env[1741]: time="2024-02-09T19:14:56.475079010Z" level=info msg="Start cni network conf syncer for default" Feb 9 19:14:56.475265 env[1741]: time="2024-02-09T19:14:56.475236596Z" level=info msg="Start streaming server" Feb 9 19:14:56.476416 env[1741]: time="2024-02-09T19:14:56.476361209Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 19:14:56.477908 env[1741]: time="2024-02-09T19:14:56.477863267Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 9 19:14:56.492307 env[1741]: time="2024-02-09T19:14:56.492235463Z" level=info msg="containerd successfully booted in 0.387199s" Feb 9 19:14:56.492392 systemd[1]: Started containerd.service. 
Feb 9 19:14:56.655482 tar[1738]: ./portmap Feb 9 19:14:56.810724 coreos-metadata[1715]: Feb 09 19:14:56.810 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 9 19:14:56.812054 coreos-metadata[1715]: Feb 09 19:14:56.811 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Feb 9 19:14:56.815019 coreos-metadata[1715]: Feb 09 19:14:56.814 INFO Fetch successful Feb 9 19:14:56.815898 coreos-metadata[1715]: Feb 09 19:14:56.815 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 9 19:14:56.816476 coreos-metadata[1715]: Feb 09 19:14:56.816 INFO Fetch successful Feb 9 19:14:56.820571 unknown[1715]: wrote ssh authorized keys file for user: core Feb 9 19:14:56.848977 update-ssh-keys[1924]: Updated "/home/core/.ssh/authorized_keys" Feb 9 19:14:56.849860 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Feb 9 19:14:56.894305 tar[1738]: ./host-local Feb 9 19:14:57.045359 tar[1738]: ./vrf Feb 9 19:14:57.080019 amazon-ssm-agent[1815]: 2024-02-09 19:14:57 INFO Create new startup processor Feb 9 19:14:57.082451 amazon-ssm-agent[1815]: 2024-02-09 19:14:57 INFO [LongRunningPluginsManager] registered plugins: {} Feb 9 19:14:57.082635 amazon-ssm-agent[1815]: 2024-02-09 19:14:57 INFO Initializing bookkeeping folders Feb 9 19:14:57.082635 amazon-ssm-agent[1815]: 2024-02-09 19:14:57 INFO removing the completed state files Feb 9 19:14:57.082635 amazon-ssm-agent[1815]: 2024-02-09 19:14:57 INFO Initializing bookkeeping folders for long running plugins Feb 9 19:14:57.082635 amazon-ssm-agent[1815]: 2024-02-09 19:14:57 INFO Initializing replies folder for MDS reply requests that couldn't reach the service Feb 9 19:14:57.082919 amazon-ssm-agent[1815]: 2024-02-09 19:14:57 INFO Initializing healthcheck folders for long running plugins Feb 9 19:14:57.082919 amazon-ssm-agent[1815]: 2024-02-09 19:14:57 INFO Initializing locations for inventory plugin Feb 9 19:14:57.082919 amazon-ssm-agent[1815]: 
2024-02-09 19:14:57 INFO Initializing default location for custom inventory Feb 9 19:14:57.082919 amazon-ssm-agent[1815]: 2024-02-09 19:14:57 INFO Initializing default location for file inventory Feb 9 19:14:57.082919 amazon-ssm-agent[1815]: 2024-02-09 19:14:57 INFO Initializing default location for role inventory Feb 9 19:14:57.082919 amazon-ssm-agent[1815]: 2024-02-09 19:14:57 INFO Init the cloudwatchlogs publisher Feb 9 19:14:57.082919 amazon-ssm-agent[1815]: 2024-02-09 19:14:57 INFO [instanceID=i-07ec81aaec893b305] Successfully loaded platform independent plugin aws:configurePackage Feb 9 19:14:57.083286 amazon-ssm-agent[1815]: 2024-02-09 19:14:57 INFO [instanceID=i-07ec81aaec893b305] Successfully loaded platform independent plugin aws:runDocument Feb 9 19:14:57.083286 amazon-ssm-agent[1815]: 2024-02-09 19:14:57 INFO [instanceID=i-07ec81aaec893b305] Successfully loaded platform independent plugin aws:runPowerShellScript Feb 9 19:14:57.083286 amazon-ssm-agent[1815]: 2024-02-09 19:14:57 INFO [instanceID=i-07ec81aaec893b305] Successfully loaded platform independent plugin aws:configureDocker Feb 9 19:14:57.083286 amazon-ssm-agent[1815]: 2024-02-09 19:14:57 INFO [instanceID=i-07ec81aaec893b305] Successfully loaded platform independent plugin aws:refreshAssociation Feb 9 19:14:57.083286 amazon-ssm-agent[1815]: 2024-02-09 19:14:57 INFO [instanceID=i-07ec81aaec893b305] Successfully loaded platform independent plugin aws:downloadContent Feb 9 19:14:57.083286 amazon-ssm-agent[1815]: 2024-02-09 19:14:57 INFO [instanceID=i-07ec81aaec893b305] Successfully loaded platform independent plugin aws:softwareInventory Feb 9 19:14:57.083286 amazon-ssm-agent[1815]: 2024-02-09 19:14:57 INFO [instanceID=i-07ec81aaec893b305] Successfully loaded platform independent plugin aws:updateSsmAgent Feb 9 19:14:57.083286 amazon-ssm-agent[1815]: 2024-02-09 19:14:57 INFO [instanceID=i-07ec81aaec893b305] Successfully loaded platform independent plugin aws:runDockerAction Feb 9 19:14:57.083286 
amazon-ssm-agent[1815]: 2024-02-09 19:14:57 INFO [instanceID=i-07ec81aaec893b305] Successfully loaded platform dependent plugin aws:runShellScript Feb 9 19:14:57.083286 amazon-ssm-agent[1815]: 2024-02-09 19:14:57 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 Feb 9 19:14:57.083836 amazon-ssm-agent[1815]: 2024-02-09 19:14:57 INFO OS: linux, Arch: arm64 Feb 9 19:14:57.088611 amazon-ssm-agent[1815]: datastore file /var/lib/amazon/ssm/i-07ec81aaec893b305/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute Feb 9 19:14:57.094290 amazon-ssm-agent[1815]: 2024-02-09 19:14:57 INFO [MessagingDeliveryService] Starting document processing engine... Feb 9 19:14:57.191949 amazon-ssm-agent[1815]: 2024-02-09 19:14:57 INFO [MessagingDeliveryService] [EngineProcessor] Starting Feb 9 19:14:57.214023 tar[1738]: ./bridge Feb 9 19:14:57.287092 amazon-ssm-agent[1815]: 2024-02-09 19:14:57 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing Feb 9 19:14:57.367175 tar[1738]: ./tuning Feb 9 19:14:57.381708 amazon-ssm-agent[1815]: 2024-02-09 19:14:57 INFO [MessagingDeliveryService] Starting message polling Feb 9 19:14:57.476436 amazon-ssm-agent[1815]: 2024-02-09 19:14:57 INFO [MessagingDeliveryService] Starting send replies to MDS Feb 9 19:14:57.482832 tar[1738]: ./firewall Feb 9 19:14:57.571363 amazon-ssm-agent[1815]: 2024-02-09 19:14:57 INFO [instanceID=i-07ec81aaec893b305] Starting association polling Feb 9 19:14:57.633444 tar[1738]: ./host-device Feb 9 19:14:57.666499 amazon-ssm-agent[1815]: 2024-02-09 19:14:57 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting Feb 9 19:14:57.761855 amazon-ssm-agent[1815]: 2024-02-09 19:14:57 INFO [MessagingDeliveryService] [Association] Launching response handler Feb 9 19:14:57.771939 tar[1738]: ./sbr Feb 9 19:14:57.857368 amazon-ssm-agent[1815]: 2024-02-09 19:14:57 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing Feb 9 19:14:57.889591 tar[1738]: 
./loopback
Feb 9 19:14:57.953095 amazon-ssm-agent[1815]: 2024-02-09 19:14:57 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service
Feb 9 19:14:57.958380 tar[1747]: linux-arm64/LICENSE
Feb 9 19:14:57.959016 tar[1747]: linux-arm64/README.md
Feb 9 19:14:57.972069 systemd[1]: Finished prepare-helm.service.
Feb 9 19:14:58.014214 tar[1738]: ./dhcp
Feb 9 19:14:58.049167 amazon-ssm-agent[1815]: 2024-02-09 19:14:57 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized
Feb 9 19:14:58.063047 systemd[1]: Finished prepare-critools.service.
Feb 9 19:14:58.145336 amazon-ssm-agent[1815]: 2024-02-09 19:14:57 INFO [MessageGatewayService] Starting session document processing engine...
Feb 9 19:14:58.195160 tar[1738]: ./ptp
Feb 9 19:14:58.242204 amazon-ssm-agent[1815]: 2024-02-09 19:14:57 INFO [MessageGatewayService] [EngineProcessor] Starting
Feb 9 19:14:58.259864 tar[1738]: ./ipvlan
Feb 9 19:14:58.323036 tar[1738]: ./bandwidth
Feb 9 19:14:58.338187 amazon-ssm-agent[1815]: 2024-02-09 19:14:57 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module.
Feb 9 19:14:58.412083 systemd[1]: Finished prepare-cni-plugins.service.
Feb 9 19:14:58.434801 amazon-ssm-agent[1815]: 2024-02-09 19:14:57 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-07ec81aaec893b305, requestId: 9f924c61-2678-41eb-8b9f-3b2060431abf
Feb 9 19:14:58.480465 locksmithd[1799]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 9 19:14:58.531713 amazon-ssm-agent[1815]: 2024-02-09 19:14:57 INFO [OfflineService] Starting document processing engine...
Feb 9 19:14:58.629483 amazon-ssm-agent[1815]: 2024-02-09 19:14:57 INFO [OfflineService] [EngineProcessor] Starting
Feb 9 19:14:58.727335 amazon-ssm-agent[1815]: 2024-02-09 19:14:57 INFO [OfflineService] [EngineProcessor] Initial processing
Feb 9 19:14:58.825485 amazon-ssm-agent[1815]: 2024-02-09 19:14:57 INFO [OfflineService] Starting message polling
Feb 9 19:14:58.923828 amazon-ssm-agent[1815]: 2024-02-09 19:14:57 INFO [OfflineService] Starting send replies to MDS
Feb 9 19:14:59.022252 amazon-ssm-agent[1815]: 2024-02-09 19:14:57 INFO [LongRunningPluginsManager] starting long running plugin manager
Feb 9 19:14:59.120460 amazon-ssm-agent[1815]: 2024-02-09 19:14:57 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute
Feb 9 19:14:59.218794 amazon-ssm-agent[1815]: 2024-02-09 19:14:57 INFO [HealthCheck] HealthCheck reporting agent health.
Feb 9 19:14:59.319121 amazon-ssm-agent[1815]: 2024-02-09 19:14:57 INFO [MessageGatewayService] listening reply.
Feb 9 19:14:59.417913 amazon-ssm-agent[1815]: 2024-02-09 19:14:57 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck
Feb 9 19:14:59.517650 amazon-ssm-agent[1815]: 2024-02-09 19:14:57 INFO [StartupProcessor] Executing startup processor tasks
Feb 9 19:14:59.617478 amazon-ssm-agent[1815]: 2024-02-09 19:14:57 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running
Feb 9 19:14:59.716704 amazon-ssm-agent[1815]: 2024-02-09 19:14:57 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk
Feb 9 19:14:59.817262 amazon-ssm-agent[1815]: 2024-02-09 19:14:57 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.2
Feb 9 19:14:59.917166 amazon-ssm-agent[1815]: 2024-02-09 19:14:57 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-07ec81aaec893b305?role=subscribe&stream=input
Feb 9 19:15:00.017348 amazon-ssm-agent[1815]: 2024-02-09 19:14:57 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-07ec81aaec893b305?role=subscribe&stream=input
Feb 9 19:15:00.117364 amazon-ssm-agent[1815]: 2024-02-09 19:14:57 INFO [MessageGatewayService] Starting receiving message from control channel
Feb 9 19:15:00.217855 amazon-ssm-agent[1815]: 2024-02-09 19:14:57 INFO [MessageGatewayService] [EngineProcessor] Initial processing
Feb 9 19:15:01.784432 amazon-ssm-agent[1815]: 2024-02-09 19:15:01 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds.
Feb 9 19:15:02.472665 sshd_keygen[1769]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 9 19:15:02.509151 systemd[1]: Finished sshd-keygen.service.
Feb 9 19:15:02.514268 systemd[1]: Starting issuegen.service...
Feb 9 19:15:02.520000 systemd[1]: Started sshd@0-172.31.18.155:22-147.75.109.163:48632.service.
Feb 9 19:15:02.532704 systemd[1]: issuegen.service: Deactivated successfully.
Feb 9 19:15:02.533268 systemd[1]: Finished issuegen.service.
Feb 9 19:15:02.538320 systemd[1]: Starting systemd-user-sessions.service...
Feb 9 19:15:02.552544 systemd[1]: Finished systemd-user-sessions.service.
Feb 9 19:15:02.557543 systemd[1]: Started getty@tty1.service.
Feb 9 19:15:02.565584 systemd[1]: Started serial-getty@ttyS0.service.
Feb 9 19:15:02.568379 systemd[1]: Reached target getty.target.
Feb 9 19:15:02.573162 systemd[1]: Reached target multi-user.target.
Feb 9 19:15:02.579900 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Feb 9 19:15:02.599737 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Feb 9 19:15:02.600327 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Feb 9 19:15:02.602616 systemd[1]: Startup finished in 12.450s (kernel) + 14.694s (userspace) = 27.144s.
Feb 9 19:15:02.747678 sshd[1954]: Accepted publickey for core from 147.75.109.163 port 48632 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78
Feb 9 19:15:02.752292 sshd[1954]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:15:02.768823 systemd[1]: Created slice user-500.slice.
Feb 9 19:15:02.771046 systemd[1]: Starting user-runtime-dir@500.service...
Feb 9 19:15:02.776444 systemd-logind[1733]: New session 1 of user core.
Feb 9 19:15:02.789813 systemd[1]: Finished user-runtime-dir@500.service.
Feb 9 19:15:02.794345 systemd[1]: Starting user@500.service...
Feb 9 19:15:02.800404 (systemd)[1968]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:15:02.978519 systemd[1968]: Queued start job for default target default.target.
Feb 9 19:15:02.979733 systemd[1968]: Reached target paths.target.
Feb 9 19:15:02.979827 systemd[1968]: Reached target sockets.target.
Feb 9 19:15:02.979863 systemd[1968]: Reached target timers.target.
Feb 9 19:15:02.979894 systemd[1968]: Reached target basic.target.
Feb 9 19:15:02.979995 systemd[1968]: Reached target default.target.
Feb 9 19:15:02.980063 systemd[1968]: Startup finished in 166ms.
Feb 9 19:15:02.980581 systemd[1]: Started user@500.service.
Feb 9 19:15:02.982674 systemd[1]: Started session-1.scope.
Feb 9 19:15:03.130319 systemd[1]: Started sshd@1-172.31.18.155:22-147.75.109.163:44926.service.
Feb 9 19:15:03.310374 sshd[1977]: Accepted publickey for core from 147.75.109.163 port 44926 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78
Feb 9 19:15:03.312899 sshd[1977]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:15:03.321317 systemd-logind[1733]: New session 2 of user core.
Feb 9 19:15:03.322288 systemd[1]: Started session-2.scope.
Feb 9 19:15:03.459897 sshd[1977]: pam_unix(sshd:session): session closed for user core
Feb 9 19:15:03.464924 systemd[1]: sshd@1-172.31.18.155:22-147.75.109.163:44926.service: Deactivated successfully.
Feb 9 19:15:03.466651 systemd-logind[1733]: Session 2 logged out. Waiting for processes to exit.
Feb 9 19:15:03.466854 systemd[1]: session-2.scope: Deactivated successfully.
Feb 9 19:15:03.469263 systemd-logind[1733]: Removed session 2.
Feb 9 19:15:03.486276 systemd[1]: Started sshd@2-172.31.18.155:22-147.75.109.163:44938.service.
Feb 9 19:15:03.659750 sshd[1984]: Accepted publickey for core from 147.75.109.163 port 44938 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78
Feb 9 19:15:03.662928 sshd[1984]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:15:03.670995 systemd-logind[1733]: New session 3 of user core.
Feb 9 19:15:03.671469 systemd[1]: Started session-3.scope.
Feb 9 19:15:03.800252 sshd[1984]: pam_unix(sshd:session): session closed for user core
Feb 9 19:15:03.805712 systemd-logind[1733]: Session 3 logged out. Waiting for processes to exit.
Feb 9 19:15:03.806408 systemd[1]: sshd@2-172.31.18.155:22-147.75.109.163:44938.service: Deactivated successfully.
Feb 9 19:15:03.807912 systemd[1]: session-3.scope: Deactivated successfully.
Feb 9 19:15:03.810338 systemd-logind[1733]: Removed session 3.
Feb 9 19:15:03.826421 systemd[1]: Started sshd@3-172.31.18.155:22-147.75.109.163:44946.service.
Feb 9 19:15:04.002011 sshd[1991]: Accepted publickey for core from 147.75.109.163 port 44946 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78
Feb 9 19:15:04.005215 sshd[1991]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:15:04.013880 systemd-logind[1733]: New session 4 of user core.
Feb 9 19:15:04.014703 systemd[1]: Started session-4.scope.
Feb 9 19:15:04.150835 sshd[1991]: pam_unix(sshd:session): session closed for user core
Feb 9 19:15:04.156723 systemd[1]: sshd@3-172.31.18.155:22-147.75.109.163:44946.service: Deactivated successfully.
Feb 9 19:15:04.159636 systemd[1]: session-4.scope: Deactivated successfully.
Feb 9 19:15:04.160840 systemd-logind[1733]: Session 4 logged out. Waiting for processes to exit.
Feb 9 19:15:04.164351 systemd-logind[1733]: Removed session 4.
Feb 9 19:15:04.176602 systemd[1]: Started sshd@4-172.31.18.155:22-147.75.109.163:44956.service.
Feb 9 19:15:04.352126 sshd[1998]: Accepted publickey for core from 147.75.109.163 port 44956 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78
Feb 9 19:15:04.355317 sshd[1998]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:15:04.364641 systemd[1]: Started session-5.scope.
Feb 9 19:15:04.367262 systemd-logind[1733]: New session 5 of user core.
Feb 9 19:15:04.487693 sudo[2002]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Feb 9 19:15:04.488709 sudo[2002]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 9 19:15:04.501032 dbus-daemon[1717]: avc: received setenforce notice (enforcing=1)
Feb 9 19:15:04.504129 sudo[2002]: pam_unix(sudo:session): session closed for user root
Feb 9 19:15:04.529146 sshd[1998]: pam_unix(sshd:session): session closed for user core
Feb 9 19:15:04.534787 systemd[1]: sshd@4-172.31.18.155:22-147.75.109.163:44956.service: Deactivated successfully.
Feb 9 19:15:04.537633 systemd[1]: session-5.scope: Deactivated successfully.
Feb 9 19:15:04.538274 systemd-logind[1733]: Session 5 logged out. Waiting for processes to exit.
Feb 9 19:15:04.540736 systemd-logind[1733]: Removed session 5.
Feb 9 19:15:04.555248 systemd[1]: Started sshd@5-172.31.18.155:22-147.75.109.163:49396.service.
Feb 9 19:15:04.729461 sshd[2006]: Accepted publickey for core from 147.75.109.163 port 49396 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78
Feb 9 19:15:04.732548 sshd[2006]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:15:04.741040 systemd[1]: Started session-6.scope.
Feb 9 19:15:04.742583 systemd-logind[1733]: New session 6 of user core.
Feb 9 19:15:04.852935 sudo[2011]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Feb 9 19:15:04.853443 sudo[2011]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 9 19:15:04.858957 sudo[2011]: pam_unix(sudo:session): session closed for user root
Feb 9 19:15:04.868040 sudo[2010]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Feb 9 19:15:04.869109 sudo[2010]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 9 19:15:04.886475 systemd[1]: Stopping audit-rules.service...
Feb 9 19:15:04.886000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Feb 9 19:15:04.890881 kernel: kauditd_printk_skb: 37 callbacks suppressed
Feb 9 19:15:04.890962 kernel: audit: type=1305 audit(1707506104.886:130): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Feb 9 19:15:04.891402 auditctl[2014]: No rules
Feb 9 19:15:04.892711 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 9 19:15:04.893245 systemd[1]: Stopped audit-rules.service.
Feb 9 19:15:04.897014 systemd[1]: Starting audit-rules.service...
Feb 9 19:15:04.910428 kernel: audit: type=1300 audit(1707506104.886:130): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc90b7540 a2=420 a3=0 items=0 ppid=1 pid=2014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:15:04.886000 audit[2014]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc90b7540 a2=420 a3=0 items=0 ppid=1 pid=2014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:15:04.886000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44
Feb 9 19:15:04.915899 kernel: audit: type=1327 audit(1707506104.886:130): proctitle=2F7362696E2F617564697463746C002D44
Feb 9 19:15:04.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:15:04.924148 kernel: audit: type=1131 audit(1707506104.890:131): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:15:04.940321 augenrules[2032]: No rules
Feb 9 19:15:04.942159 systemd[1]: Finished audit-rules.service.
Feb 9 19:15:04.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:15:04.954006 kernel: audit: type=1130 audit(1707506104.940:132): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:15:04.954147 kernel: audit: type=1106 audit(1707506104.951:133): pid=2010 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 9 19:15:04.951000 audit[2010]: USER_END pid=2010 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 9 19:15:04.952805 sudo[2010]: pam_unix(sudo:session): session closed for user root
Feb 9 19:15:04.951000 audit[2010]: CRED_DISP pid=2010 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 9 19:15:04.971298 kernel: audit: type=1104 audit(1707506104.951:134): pid=2010 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 9 19:15:04.985339 sshd[2006]: pam_unix(sshd:session): session closed for user core
Feb 9 19:15:04.986000 audit[2006]: USER_END pid=2006 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:15:04.990643 systemd-logind[1733]: Session 6 logged out. Waiting for processes to exit.
Feb 9 19:15:04.992578 systemd[1]: sshd@5-172.31.18.155:22-147.75.109.163:49396.service: Deactivated successfully.
Feb 9 19:15:04.993954 systemd[1]: session-6.scope: Deactivated successfully.
Feb 9 19:15:04.997175 systemd-logind[1733]: Removed session 6.
Feb 9 19:15:04.986000 audit[2006]: CRED_DISP pid=2006 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:15:05.008316 kernel: audit: type=1106 audit(1707506104.986:135): pid=2006 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:15:05.008448 kernel: audit: type=1104 audit(1707506104.986:136): pid=2006 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:15:04.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.18.155:22-147.75.109.163:49396 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:15:05.015466 systemd[1]: Started sshd@6-172.31.18.155:22-147.75.109.163:49408.service.
Feb 9 19:15:05.017656 kernel: audit: type=1131 audit(1707506104.992:137): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.18.155:22-147.75.109.163:49396 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:15:05.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.18.155:22-147.75.109.163:49408 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:15:05.190000 audit[2039]: USER_ACCT pid=2039 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:15:05.191436 sshd[2039]: Accepted publickey for core from 147.75.109.163 port 49408 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78
Feb 9 19:15:05.192000 audit[2039]: CRED_ACQ pid=2039 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:15:05.192000 audit[2039]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe50ad560 a2=3 a3=1 items=0 ppid=1 pid=2039 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:15:05.192000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 9 19:15:05.194348 sshd[2039]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:15:05.203057 systemd[1]: Started session-7.scope.
Feb 9 19:15:05.205495 systemd-logind[1733]: New session 7 of user core.
Feb 9 19:15:05.215000 audit[2039]: USER_START pid=2039 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:15:05.217000 audit[2042]: CRED_ACQ pid=2042 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:15:05.313000 audit[2043]: USER_ACCT pid=2043 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 9 19:15:05.315193 sudo[2043]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 9 19:15:05.315000 audit[2043]: CRED_REFR pid=2043 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 9 19:15:05.316481 sudo[2043]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 9 19:15:05.319000 audit[2043]: USER_START pid=2043 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 9 19:15:05.998837 systemd[1]: Starting docker.service...
Feb 9 19:15:06.073533 env[2058]: time="2024-02-09T19:15:06.073466761Z" level=info msg="Starting up"
Feb 9 19:15:06.076566 env[2058]: time="2024-02-09T19:15:06.076494363Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 9 19:15:06.076566 env[2058]: time="2024-02-09T19:15:06.076544651Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 9 19:15:06.076760 env[2058]: time="2024-02-09T19:15:06.076587541Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Feb 9 19:15:06.076760 env[2058]: time="2024-02-09T19:15:06.076612636Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 9 19:15:06.080043 env[2058]: time="2024-02-09T19:15:06.079982083Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 9 19:15:06.080043 env[2058]: time="2024-02-09T19:15:06.080026564Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 9 19:15:06.080246 env[2058]: time="2024-02-09T19:15:06.080061899Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Feb 9 19:15:06.080246 env[2058]: time="2024-02-09T19:15:06.080089399Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 9 19:15:06.559304 env[2058]: time="2024-02-09T19:15:06.559235848Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Feb 9 19:15:06.559304 env[2058]: time="2024-02-09T19:15:06.559281677Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Feb 9 19:15:06.559679 env[2058]: time="2024-02-09T19:15:06.559524939Z" level=info msg="Loading containers: start."
Feb 9 19:15:06.635000 audit[2089]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2089 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 19:15:06.635000 audit[2089]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=fffff1defb50 a2=0 a3=1 items=0 ppid=2058 pid=2089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:15:06.635000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552
Feb 9 19:15:06.639000 audit[2091]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2091 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 19:15:06.639000 audit[2091]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffe51597b0 a2=0 a3=1 items=0 ppid=2058 pid=2091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:15:06.639000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552
Feb 9 19:15:06.643000 audit[2093]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2093 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 19:15:06.643000 audit[2093]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffeaae7630 a2=0 a3=1 items=0 ppid=2058 pid=2093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:15:06.643000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31
Feb 9 19:15:06.647000 audit[2095]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2095 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 19:15:06.647000 audit[2095]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffc960dda0 a2=0 a3=1 items=0 ppid=2058 pid=2095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:15:06.647000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32
Feb 9 19:15:06.651000 audit[2097]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=2097 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 19:15:06.651000 audit[2097]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffd43cbe90 a2=0 a3=1 items=0 ppid=2058 pid=2097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:15:06.651000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E
Feb 9 19:15:06.671000 audit[2102]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=2102 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 19:15:06.671000 audit[2102]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffefc27ea0 a2=0 a3=1 items=0 ppid=2058 pid=2102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:15:06.671000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E
Feb 9 19:15:06.683000 audit[2104]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2104 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 19:15:06.683000 audit[2104]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc3e25a10 a2=0 a3=1 items=0 ppid=2058 pid=2104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:15:06.683000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552
Feb 9 19:15:06.686000 audit[2106]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=2106 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 19:15:06.686000 audit[2106]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffd6d39bc0 a2=0 a3=1 items=0 ppid=2058 pid=2106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:15:06.686000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E
Feb 9 19:15:06.690000 audit[2108]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=2108 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 19:15:06.690000 audit[2108]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=308 a0=3 a1=ffffd93198c0 a2=0 a3=1 items=0 ppid=2058 pid=2108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:15:06.690000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Feb 9 19:15:06.708000 audit[2112]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=2112 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 19:15:06.708000 audit[2112]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffea634000 a2=0 a3=1 items=0 ppid=2058 pid=2112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:15:06.708000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552
Feb 9 19:15:06.710000 audit[2113]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2113 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 19:15:06.710000 audit[2113]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=fffffbf404f0 a2=0 a3=1 items=0 ppid=2058 pid=2113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:15:06.710000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Feb 9 19:15:06.721818 kernel: Initializing XFRM netlink socket
Feb 9 19:15:06.764221 env[2058]: time="2024-02-09T19:15:06.764153794Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Feb 9 19:15:06.766024 (udev-worker)[2069]: Network interface NamePolicy= disabled on kernel command line.
Feb 9 19:15:06.800000 audit[2121]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=2121 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 19:15:06.800000 audit[2121]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=492 a0=3 a1=ffffe6fae130 a2=0 a3=1 items=0 ppid=2058 pid=2121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:15:06.800000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445
Feb 9 19:15:06.816000 audit[2124]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=2124 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 19:15:06.816000 audit[2124]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffcd06cfa0 a2=0 a3=1 items=0 ppid=2058 pid=2124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:15:06.816000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E
Feb 9 19:15:06.822000 audit[2127]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=2127 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 19:15:06.822000 audit[2127]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffeaad3d10 a2=0 a3=1 items=0 ppid=2058 pid=2127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:15:06.822000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054
Feb 9 19:15:06.826000 audit[2129]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=2129 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 19:15:06.826000 audit[2129]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffda0f0690 a2=0 a3=1 items=0 ppid=2058 pid=2129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:15:06.826000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054
Feb 9 19:15:06.832000 audit[2131]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=2131 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 19:15:06.832000 audit[2131]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=356 a0=3 a1=ffffd90ced90 a2=0 a3=1 items=0 ppid=2058 pid=2131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:15:06.832000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552
Feb 9 19:15:06.836000 audit[2133]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=2133 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 19:15:06.836000 audit[2133]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=444 a0=3 a1=ffffc05cca70 a2=0 a3=1 items=0 ppid=2058 pid=2133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:15:06.836000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38
Feb 9 19:15:06.840000 audit[2135]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=2135 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 19:15:06.840000 audit[2135]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=304 a0=3 a1=ffffec4550b0 a2=0 a3=1 items=0 ppid=2058 pid=2135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:15:06.840000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552
Feb 9 19:15:06.852000 audit[2138]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=2138 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 19:15:06.852000 audit[2138]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=508 a0=3 a1=ffffead6a140 a2=0 a3=1 items=0 ppid=2058 pid=2138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:15:06.852000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054
Feb 9 19:15:06.857000 audit[2140]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=2140 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 19:15:06.857000 audit[2140]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=240 a0=3 a1=ffffd36f7ff0 a2=0 a3=1 items=0 ppid=2058 pid=2140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:15:06.857000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31
Feb 9 19:15:06.861000 audit[2142]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=2142 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 19:15:06.861000 audit[2142]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffce651b40 a2=0 a3=1 items=0 ppid=2058 pid=2142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:15:06.861000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32
Feb 9 19:15:06.865000 audit[2144]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=2144 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 19:15:06.865000 audit[2144]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffe71661a0 a2=0 a3=1 items=0 ppid=2058 pid=2144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:15:06.865000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50
Feb 9 19:15:06.868379 systemd-networkd[1533]: docker0: Link UP
Feb 9 19:15:06.882000 audit[2148]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=2148 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 19:15:06.882000 audit[2148]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffd75f1750 a2=0 a3=1 items=0 ppid=2058 pid=2148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:15:06.882000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552
Feb 9 19:15:06.884000 audit[2149]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=2149 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 19:15:06.884000 audit[2149]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffd46b1340 a2=0 a3=1 items=0 ppid=2058 pid=2149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:15:06.884000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Feb 9 19:15:06.886304 env[2058]: time="2024-02-09T19:15:06.886266913Z" level=info msg="Loading containers: done."
Feb 9 19:15:06.909860 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1406428006-merged.mount: Deactivated successfully.
Feb 9 19:15:06.925092 env[2058]: time="2024-02-09T19:15:06.925028834Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 9 19:15:06.925708 env[2058]: time="2024-02-09T19:15:06.925657997Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 9 19:15:06.926078 env[2058]: time="2024-02-09T19:15:06.926052692Z" level=info msg="Daemon has completed initialization" Feb 9 19:15:06.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:15:06.953023 systemd[1]: Started docker.service. Feb 9 19:15:06.964950 env[2058]: time="2024-02-09T19:15:06.964869236Z" level=info msg="API listen on /run/docker.sock" Feb 9 19:15:06.999209 systemd[1]: Reloading. Feb 9 19:15:07.113519 /usr/lib/systemd/system-generators/torcx-generator[2200]: time="2024-02-09T19:15:07Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:15:07.113583 /usr/lib/systemd/system-generators/torcx-generator[2200]: time="2024-02-09T19:15:07Z" level=info msg="torcx already run" Feb 9 19:15:07.293453 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:15:07.293707 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 9 19:15:07.336897 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:15:07.537937 systemd[1]: Started kubelet.service. Feb 9 19:15:07.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:15:07.677735 kubelet[2257]: E0209 19:15:07.677635 2257 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 19:15:07.680000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 9 19:15:07.681592 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:15:07.682045 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:15:08.107780 env[1741]: time="2024-02-09T19:15:08.107676837Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 9 19:15:08.776380 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4178058303.mount: Deactivated successfully. 
Feb 9 19:15:12.274500 env[1741]: time="2024-02-09T19:15:12.274420801Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:12.277656 env[1741]: time="2024-02-09T19:15:12.277596277Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d88fbf485621d26e515136c1848b666d7dfe0fa84ca7ebd826447b039d306d88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:12.281137 env[1741]: time="2024-02-09T19:15:12.281083884Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:12.284380 env[1741]: time="2024-02-09T19:15:12.284319074Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:12.286155 env[1741]: time="2024-02-09T19:15:12.286111937Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:d88fbf485621d26e515136c1848b666d7dfe0fa84ca7ebd826447b039d306d88\"" Feb 9 19:15:12.303018 env[1741]: time="2024-02-09T19:15:12.302967494Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 9 19:15:15.695644 env[1741]: time="2024-02-09T19:15:15.695576184Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:15.698592 env[1741]: time="2024-02-09T19:15:15.698531555Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:71d8e883014e0849ca9a3161bd1feac09ad210dea2f4140732e218f04a6826c2,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 9 19:15:15.702087 env[1741]: time="2024-02-09T19:15:15.702026719Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:15.705566 env[1741]: time="2024-02-09T19:15:15.705512573Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:15.707154 env[1741]: time="2024-02-09T19:15:15.707104768Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:71d8e883014e0849ca9a3161bd1feac09ad210dea2f4140732e218f04a6826c2\"" Feb 9 19:15:15.725369 env[1741]: time="2024-02-09T19:15:15.725320952Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 9 19:15:17.400641 env[1741]: time="2024-02-09T19:15:17.400576909Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:17.403300 env[1741]: time="2024-02-09T19:15:17.403252137Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a636f3d6300bad4775ea80ad544e38f486a039732c4871bddc1db3a5336c871a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:17.406495 env[1741]: time="2024-02-09T19:15:17.406428799Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:17.409923 env[1741]: time="2024-02-09T19:15:17.409873586Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:17.411483 env[1741]: time="2024-02-09T19:15:17.411430515Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:a636f3d6300bad4775ea80ad544e38f486a039732c4871bddc1db3a5336c871a\"" Feb 9 19:15:17.428134 env[1741]: time="2024-02-09T19:15:17.428084086Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 9 19:15:17.933547 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 9 19:15:17.933896 systemd[1]: Stopped kubelet.service. Feb 9 19:15:17.945735 kernel: kauditd_printk_skb: 86 callbacks suppressed Feb 9 19:15:17.945901 kernel: audit: type=1130 audit(1707506117.932:174): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:15:17.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:15:17.936898 systemd[1]: Started kubelet.service. Feb 9 19:15:17.954289 kernel: audit: type=1131 audit(1707506117.932:175): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:15:17.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:15:17.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:15:17.962363 kernel: audit: type=1130 audit(1707506117.935:176): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:15:18.057433 kubelet[2293]: E0209 19:15:18.057325 2293 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 19:15:18.065264 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:15:18.065662 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:15:18.065000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 9 19:15:18.075780 kernel: audit: type=1131 audit(1707506118.065:177): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 9 19:15:18.989961 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount70628380.mount: Deactivated successfully. 
Feb 9 19:15:19.661090 env[1741]: time="2024-02-09T19:15:19.661023726Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:19.664199 env[1741]: time="2024-02-09T19:15:19.664148897Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:19.667428 env[1741]: time="2024-02-09T19:15:19.667372825Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:19.669276 env[1741]: time="2024-02-09T19:15:19.669236508Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:19.670519 env[1741]: time="2024-02-09T19:15:19.670457409Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926\"" Feb 9 19:15:19.689045 env[1741]: time="2024-02-09T19:15:19.688994327Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 9 19:15:20.212465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3733391864.mount: Deactivated successfully. 
Feb 9 19:15:20.222564 env[1741]: time="2024-02-09T19:15:20.222507942Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:20.225425 env[1741]: time="2024-02-09T19:15:20.225380868Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:20.227881 env[1741]: time="2024-02-09T19:15:20.227823697Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:20.230540 env[1741]: time="2024-02-09T19:15:20.230495833Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:20.231594 env[1741]: time="2024-02-09T19:15:20.231545925Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 9 19:15:20.249338 env[1741]: time="2024-02-09T19:15:20.249291378Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\"" Feb 9 19:15:21.259162 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1467452822.mount: Deactivated successfully. 
Feb 9 19:15:25.132456 env[1741]: time="2024-02-09T19:15:25.132376545Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:25.136551 env[1741]: time="2024-02-09T19:15:25.136492273Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ef245802824036d4a23ba6f8b3f04c055416f9dc73a54d546b1f98ad16f6b8cb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:25.140197 env[1741]: time="2024-02-09T19:15:25.140138095Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:25.144953 env[1741]: time="2024-02-09T19:15:25.144872517Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:25.145606 env[1741]: time="2024-02-09T19:15:25.145560878Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:ef245802824036d4a23ba6f8b3f04c055416f9dc73a54d546b1f98ad16f6b8cb\"" Feb 9 19:15:25.161292 env[1741]: time="2024-02-09T19:15:25.161235636Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 9 19:15:25.828260 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4036962359.mount: Deactivated successfully. Feb 9 19:15:26.255721 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 9 19:15:26.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:15:26.266856 kernel: audit: type=1131 audit(1707506126.255:178): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:15:26.497410 env[1741]: time="2024-02-09T19:15:26.497354695Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:26.500075 env[1741]: time="2024-02-09T19:15:26.500027038Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b19406328e70dd2f6a36d6dbe4e867b0684ced2fdeb2f02ecb54ead39ec0bac0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:26.504550 env[1741]: time="2024-02-09T19:15:26.504502286Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:26.507320 env[1741]: time="2024-02-09T19:15:26.506802470Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:26.508262 env[1741]: time="2024-02-09T19:15:26.508210384Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:b19406328e70dd2f6a36d6dbe4e867b0684ced2fdeb2f02ecb54ead39ec0bac0\"" Feb 9 19:15:28.176546 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 9 19:15:28.176931 systemd[1]: Stopped kubelet.service. Feb 9 19:15:28.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:15:28.179739 systemd[1]: Started kubelet.service. Feb 9 19:15:28.200538 kernel: audit: type=1130 audit(1707506128.175:179): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:15:28.200709 kernel: audit: type=1131 audit(1707506128.175:180): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:15:28.175000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:15:28.214457 kernel: audit: type=1130 audit(1707506128.178:181): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:15:28.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:15:28.327077 kubelet[2373]: E0209 19:15:28.327004 2373 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 19:15:28.331205 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:15:28.331601 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:15:28.330000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Feb 9 19:15:28.344781 kernel: audit: type=1131 audit(1707506128.330:182): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 9 19:15:31.817203 amazon-ssm-agent[1815]: 2024-02-09 19:15:31 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Feb 9 19:15:32.584199 systemd[1]: Stopped kubelet.service. Feb 9 19:15:32.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:15:32.582000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:15:32.601379 kernel: audit: type=1130 audit(1707506132.582:183): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:15:32.601458 kernel: audit: type=1131 audit(1707506132.582:184): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:15:32.628400 systemd[1]: Reloading. 
Feb 9 19:15:32.734905 /usr/lib/systemd/system-generators/torcx-generator[2403]: time="2024-02-09T19:15:32Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:15:32.734967 /usr/lib/systemd/system-generators/torcx-generator[2403]: time="2024-02-09T19:15:32Z" level=info msg="torcx already run" Feb 9 19:15:32.914280 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:15:32.914580 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:15:32.957787 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:15:33.144046 systemd[1]: Started kubelet.service. Feb 9 19:15:33.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:15:33.154789 kernel: audit: type=1130 audit(1707506133.144:185): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:15:33.242084 kubelet[2466]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. 
Feb 9 19:15:33.242084 kubelet[2466]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:15:33.242674 kubelet[2466]: I0209 19:15:33.242210 2466 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 19:15:33.244766 kubelet[2466]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 19:15:33.244766 kubelet[2466]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:15:33.850371 kubelet[2466]: I0209 19:15:33.850332 2466 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 19:15:33.850595 kubelet[2466]: I0209 19:15:33.850574 2466 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 19:15:33.851136 kubelet[2466]: I0209 19:15:33.851109 2466 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 19:15:33.856979 kubelet[2466]: E0209 19:15:33.856935 2466 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.18.155:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.18.155:6443: connect: connection refused Feb 9 19:15:33.857181 kubelet[2466]: I0209 19:15:33.857014 2466 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 19:15:33.860797 kubelet[2466]: W0209 19:15:33.860739 2466 machine.go:65] Cannot read vendor 
id correctly, set empty. Feb 9 19:15:33.862228 kubelet[2466]: I0209 19:15:33.862178 2466 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 9 19:15:33.863172 kubelet[2466]: I0209 19:15:33.863149 2466 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 19:15:33.863419 kubelet[2466]: I0209 19:15:33.863387 2466 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 19:15:33.863658 kubelet[2466]: I0209 19:15:33.863636 2466 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 19:15:33.863816 kubelet[2466]: I0209 19:15:33.863748 2466 
container_manager_linux.go:308] "Creating device plugin manager" Feb 9 19:15:33.864094 kubelet[2466]: I0209 19:15:33.864075 2466 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:15:33.869868 kubelet[2466]: I0209 19:15:33.869827 2466 kubelet.go:398] "Attempting to sync node with API server" Feb 9 19:15:33.870644 kubelet[2466]: I0209 19:15:33.870619 2466 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 19:15:33.870920 kubelet[2466]: W0209 19:15:33.870724 2466 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.18.155:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-155&limit=500&resourceVersion=0": dial tcp 172.31.18.155:6443: connect: connection refused Feb 9 19:15:33.871065 kubelet[2466]: E0209 19:15:33.871040 2466 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.18.155:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-155&limit=500&resourceVersion=0": dial tcp 172.31.18.155:6443: connect: connection refused Feb 9 19:15:33.871217 kubelet[2466]: I0209 19:15:33.871197 2466 kubelet.go:297] "Adding apiserver pod source" Feb 9 19:15:33.878994 kubelet[2466]: I0209 19:15:33.878943 2466 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 19:15:33.880456 kubelet[2466]: W0209 19:15:33.880362 2466 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.18.155:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.18.155:6443: connect: connection refused Feb 9 19:15:33.880684 kubelet[2466]: E0209 19:15:33.880661 2466 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.18.155:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.18.155:6443: connect: 
connection refused Feb 9 19:15:33.881841 kubelet[2466]: I0209 19:15:33.881805 2466 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 19:15:33.882662 kubelet[2466]: W0209 19:15:33.882632 2466 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 9 19:15:33.883621 kubelet[2466]: I0209 19:15:33.883587 2466 server.go:1186] "Started kubelet" Feb 9 19:15:33.888198 kubelet[2466]: I0209 19:15:33.888133 2466 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 19:15:33.887000 audit[2466]: AVC avc: denied { mac_admin } for pid=2466 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:15:33.895721 kubelet[2466]: I0209 19:15:33.889920 2466 server.go:451] "Adding debug handlers to kubelet server" Feb 9 19:15:33.895721 kubelet[2466]: E0209 19:15:33.891887 2466 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 19:15:33.895721 kubelet[2466]: E0209 19:15:33.891925 2466 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 19:15:33.895721 kubelet[2466]: E0209 19:15:33.892033 2466 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-18-155.17b247c831eddf5b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-18-155", UID:"ip-172-31-18-155", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-18-155"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 15, 33, 883543387, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 15, 33, 883543387, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://172.31.18.155:6443/api/v1/namespaces/default/events": dial tcp 172.31.18.155:6443: connect: connection refused'(may retry after sleeping) Feb 9 19:15:33.896070 kubelet[2466]: I0209 19:15:33.895121 2466 kubelet.go:1341] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Feb 9 19:15:33.896070 kubelet[2466]: I0209 19:15:33.895192 2466 kubelet.go:1345] "Unprivileged containerized plugins might not work, could not set selinux context 
on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Feb 9 19:15:33.896070 kubelet[2466]: I0209 19:15:33.895335 2466 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 19:15:33.901955 kernel: audit: type=1400 audit(1707506133.887:186): avc: denied { mac_admin } for pid=2466 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:15:33.902077 kernel: audit: type=1401 audit(1707506133.887:186): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:15:33.887000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:15:33.902250 kubelet[2466]: E0209 19:15:33.900301 2466 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ip-172-31-18-155\" not found" Feb 9 19:15:33.902250 kubelet[2466]: I0209 19:15:33.900367 2466 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 19:15:33.902250 kubelet[2466]: I0209 19:15:33.900463 2466 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 19:15:33.902250 kubelet[2466]: W0209 19:15:33.901036 2466 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.18.155:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.155:6443: connect: connection refused Feb 9 19:15:33.902250 kubelet[2466]: E0209 19:15:33.901098 2466 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.18.155:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.155:6443: connect: connection refused Feb 9 19:15:33.902250 kubelet[2466]: E0209 19:15:33.901422 2466 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get 
"https://172.31.18.155:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-155?timeout=10s": dial tcp 172.31.18.155:6443: connect: connection refused Feb 9 19:15:33.887000 audit[2466]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40010d0270 a1=4000ac44e0 a2=40010d0240 a3=25 items=0 ppid=1 pid=2466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:15:33.915642 kernel: audit: type=1300 audit(1707506133.887:186): arch=c00000b7 syscall=5 success=no exit=-22 a0=40010d0270 a1=4000ac44e0 a2=40010d0240 a3=25 items=0 ppid=1 pid=2466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:15:33.887000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:15:33.926810 kernel: audit: type=1327 audit(1707506133.887:186): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:15:33.892000 audit[2466]: AVC avc: denied { mac_admin } for pid=2466 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:15:33.934432 kernel: audit: type=1400 audit(1707506133.892:187): avc: denied { mac_admin } for pid=2466 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Feb 9 19:15:33.892000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:15:33.941013 kernel: audit: type=1401 audit(1707506133.892:187): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:15:33.941117 kernel: audit: type=1300 audit(1707506133.892:187): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000fc6920 a1=4000ac44f8 a2=40010d0300 a3=25 items=0 ppid=1 pid=2466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:15:33.892000 audit[2466]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000fc6920 a1=4000ac44f8 a2=40010d0300 a3=25 items=0 ppid=1 pid=2466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:15:33.892000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:15:33.904000 audit[2477]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=2477 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:15:33.904000 audit[2477]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffd8456620 a2=0 a3=1 items=0 ppid=2466 pid=2477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:15:33.904000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 9 19:15:33.926000 audit[2478]: NETFILTER_CFG 
table=filter:27 family=2 entries=1 op=nft_register_chain pid=2478 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:15:33.926000 audit[2478]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdba78290 a2=0 a3=1 items=0 ppid=2466 pid=2478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:15:33.926000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 9 19:15:33.961000 audit[2480]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=2480 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:15:33.961000 audit[2480]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=fffff9094130 a2=0 a3=1 items=0 ppid=2466 pid=2480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:15:33.961000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 9 19:15:33.967000 audit[2484]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=2484 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:15:33.967000 audit[2484]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffe9eba1e0 a2=0 a3=1 items=0 ppid=2466 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:15:33.967000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 9 19:15:33.983000 audit[2487]: 
NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2487 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:15:33.983000 audit[2487]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffcec76530 a2=0 a3=1 items=0 ppid=2466 pid=2487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:15:33.983000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Feb 9 19:15:33.990000 audit[2489]: NETFILTER_CFG table=nat:31 family=2 entries=1 op=nft_register_chain pid=2489 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:15:33.990000 audit[2489]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd25584e0 a2=0 a3=1 items=0 ppid=2466 pid=2489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:15:33.990000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 9 19:15:34.011088 kubelet[2466]: I0209 19:15:34.011040 2466 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-18-155" Feb 9 19:15:34.012433 kubelet[2466]: E0209 19:15:34.012391 2466 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.18.155:6443/api/v1/nodes\": dial tcp 172.31.18.155:6443: connect: connection refused" node="ip-172-31-18-155" Feb 9 19:15:34.011000 audit[2493]: NETFILTER_CFG table=nat:32 family=2 entries=1 op=nft_register_rule pid=2493 subj=system_u:system_r:kernel_t:s0 comm="iptables" 
Feb 9 19:15:34.011000 audit[2493]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffdf9c4060 a2=0 a3=1 items=0 ppid=2466 pid=2493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:15:34.011000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 9 19:15:34.022000 audit[2496]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=2496 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:15:34.022000 audit[2496]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=664 a0=3 a1=ffffed28d6f0 a2=0 a3=1 items=0 ppid=2466 pid=2496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:15:34.022000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 9 19:15:34.024000 audit[2497]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=2497 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:15:34.024000 audit[2497]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffddd2ce00 a2=0 a3=1 items=0 ppid=2466 pid=2497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:15:34.024000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 9 19:15:34.026000 audit[2498]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_chain pid=2498 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:15:34.026000 audit[2498]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe1f087e0 a2=0 a3=1 items=0 ppid=2466 pid=2498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:15:34.026000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 9 19:15:34.032337 kubelet[2466]: I0209 19:15:34.032305 2466 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 19:15:34.032555 kubelet[2466]: I0209 19:15:34.032534 2466 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 19:15:34.032673 kubelet[2466]: I0209 19:15:34.032653 2466 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:15:34.035603 kubelet[2466]: I0209 19:15:34.035567 2466 policy_none.go:49] "None policy: Start" Feb 9 19:15:34.036791 kubelet[2466]: I0209 19:15:34.036736 2466 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 19:15:34.036972 kubelet[2466]: I0209 19:15:34.036950 2466 state_mem.go:35] "Initializing new in-memory state store" Feb 9 19:15:34.036000 audit[2500]: NETFILTER_CFG table=nat:36 family=2 entries=1 op=nft_register_rule pid=2500 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:15:34.036000 audit[2500]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=fffff2eb4800 a2=0 a3=1 items=0 ppid=2466 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 
19:15:34.036000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 9 19:15:34.043000 audit[2502]: NETFILTER_CFG table=nat:37 family=2 entries=1 op=nft_register_rule pid=2502 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:15:34.043000 audit[2502]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffd58c96e0 a2=0 a3=1 items=0 ppid=2466 pid=2502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:15:34.043000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 9 19:15:34.047000 audit[2504]: NETFILTER_CFG table=nat:38 family=2 entries=1 op=nft_register_rule pid=2504 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:15:34.047000 audit[2504]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=364 a0=3 a1=ffffd3b98b10 a2=0 a3=1 items=0 ppid=2466 pid=2504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:15:34.047000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 9 19:15:34.054840 kubelet[2466]: I0209 19:15:34.054792 2466 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 19:15:34.053000 audit[2466]: AVC avc: denied { mac_admin } for 
pid=2466 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:15:34.053000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:15:34.053000 audit[2466]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40011eb650 a1=400121a6f0 a2=40011eb620 a3=25 items=0 ppid=1 pid=2466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:15:34.053000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:15:34.055290 kubelet[2466]: I0209 19:15:34.054944 2466 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Feb 9 19:15:34.055290 kubelet[2466]: I0209 19:15:34.055223 2466 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 19:15:34.063270 kubelet[2466]: E0209 19:15:34.063226 2466 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-18-155\" not found" Feb 9 19:15:34.064000 audit[2506]: NETFILTER_CFG table=nat:39 family=2 entries=1 op=nft_register_rule pid=2506 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:15:34.064000 audit[2506]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=220 a0=3 a1=ffffd161d720 a2=0 a3=1 items=0 ppid=2466 pid=2506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:15:34.064000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 9 19:15:34.069000 audit[2508]: NETFILTER_CFG table=nat:40 family=2 entries=1 op=nft_register_rule pid=2508 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:15:34.069000 audit[2508]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=540 a0=3 a1=fffff69c4c20 a2=0 a3=1 items=0 ppid=2466 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:15:34.069000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 
Feb 9 19:15:34.071526 kubelet[2466]: I0209 19:15:34.071491 2466 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 19:15:34.072000 audit[2509]: NETFILTER_CFG table=mangle:41 family=10 entries=2 op=nft_register_chain pid=2509 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:15:34.072000 audit[2509]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffeb891360 a2=0 a3=1 items=0 ppid=2466 pid=2509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:15:34.072000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 9 19:15:34.073000 audit[2510]: NETFILTER_CFG table=mangle:42 family=2 entries=1 op=nft_register_chain pid=2510 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:15:34.073000 audit[2510]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcf3b6b90 a2=0 a3=1 items=0 ppid=2466 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:15:34.073000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 9 19:15:34.074000 audit[2511]: NETFILTER_CFG table=nat:43 family=10 entries=2 op=nft_register_chain pid=2511 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:15:34.074000 audit[2511]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=fffff55eaba0 a2=0 a3=1 items=0 ppid=2466 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 
19:15:34.074000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 9 19:15:34.076000 audit[2512]: NETFILTER_CFG table=nat:44 family=2 entries=1 op=nft_register_chain pid=2512 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:15:34.076000 audit[2512]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcc6b58a0 a2=0 a3=1 items=0 ppid=2466 pid=2512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:15:34.076000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 9 19:15:34.079000 audit[2514]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_chain pid=2514 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:15:34.079000 audit[2514]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff1db4fc0 a2=0 a3=1 items=0 ppid=2466 pid=2514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:15:34.079000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 9 19:15:34.081000 audit[2515]: NETFILTER_CFG table=nat:46 family=10 entries=1 op=nft_register_rule pid=2515 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:15:34.081000 audit[2515]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffd1000080 a2=0 a3=1 items=0 ppid=2466 pid=2515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 
19:15:34.081000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 9 19:15:34.083000 audit[2516]: NETFILTER_CFG table=filter:47 family=10 entries=2 op=nft_register_chain pid=2516 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:15:34.083000 audit[2516]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=132 a0=3 a1=ffffff21dfb0 a2=0 a3=1 items=0 ppid=2466 pid=2516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:15:34.083000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 9 19:15:34.087000 audit[2518]: NETFILTER_CFG table=filter:48 family=10 entries=1 op=nft_register_rule pid=2518 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:15:34.087000 audit[2518]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=664 a0=3 a1=ffffc4e72d50 a2=0 a3=1 items=0 ppid=2466 pid=2518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:15:34.087000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 9 19:15:34.090000 audit[2519]: NETFILTER_CFG table=nat:49 family=10 entries=1 op=nft_register_chain pid=2519 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:15:34.090000 audit[2519]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffdc329020 a2=0 a3=1 items=0 ppid=2466 pid=2519 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:15:34.090000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 9 19:15:34.092000 audit[2520]: NETFILTER_CFG table=nat:50 family=10 entries=1 op=nft_register_chain pid=2520 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:15:34.092000 audit[2520]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd2213e20 a2=0 a3=1 items=0 ppid=2466 pid=2520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:15:34.092000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 9 19:15:34.096000 audit[2522]: NETFILTER_CFG table=nat:51 family=10 entries=1 op=nft_register_rule pid=2522 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:15:34.096000 audit[2522]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=fffff16dd750 a2=0 a3=1 items=0 ppid=2466 pid=2522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:15:34.096000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 9 19:15:34.100000 audit[2524]: NETFILTER_CFG table=nat:52 family=10 entries=2 op=nft_register_chain pid=2524 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:15:34.100000 audit[2524]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffd25f2ec0 a2=0 
a3=1 items=0 ppid=2466 pid=2524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:15:34.100000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 9 19:15:34.105612 kubelet[2466]: E0209 19:15:34.102614 2466 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://172.31.18.155:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-155?timeout=10s": dial tcp 172.31.18.155:6443: connect: connection refused Feb 9 19:15:34.109000 audit[2526]: NETFILTER_CFG table=nat:53 family=10 entries=1 op=nft_register_rule pid=2526 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:15:34.109000 audit[2526]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=364 a0=3 a1=ffffe6f895a0 a2=0 a3=1 items=0 ppid=2466 pid=2526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:15:34.109000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 9 19:15:34.113000 audit[2528]: NETFILTER_CFG table=nat:54 family=10 entries=1 op=nft_register_rule pid=2528 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:15:34.113000 audit[2528]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=220 a0=3 a1=fffff40f3860 a2=0 a3=1 items=0 ppid=2466 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:15:34.113000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 9 19:15:34.120000 audit[2530]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_rule pid=2530 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:15:34.120000 audit[2530]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=556 a0=3 a1=fffff082b950 a2=0 a3=1 items=0 ppid=2466 pid=2530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:15:34.120000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 9 19:15:34.121896 kubelet[2466]: I0209 19:15:34.121860 2466 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 19:15:34.121896 kubelet[2466]: I0209 19:15:34.121901 2466 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 19:15:34.122061 kubelet[2466]: I0209 19:15:34.121932 2466 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 19:15:34.122061 kubelet[2466]: E0209 19:15:34.122019 2466 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 19:15:34.123956 kubelet[2466]: W0209 19:15:34.123886 2466 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.18.155:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.155:6443: connect: connection refused Feb 9 19:15:34.123000 audit[2531]: NETFILTER_CFG table=mangle:56 family=10 entries=1 op=nft_register_chain pid=2531 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:15:34.123000 audit[2531]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffee4faf60 a2=0 a3=1 items=0 ppid=2466 pid=2531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:15:34.123000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 9 19:15:34.124513 kubelet[2466]: E0209 19:15:34.124486 2466 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.18.155:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.155:6443: connect: connection refused Feb 9 19:15:34.125000 audit[2532]: NETFILTER_CFG table=nat:57 family=10 entries=1 op=nft_register_chain pid=2532 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:15:34.125000 audit[2532]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=100 a0=3 a1=ffffefdd5840 a2=0 a3=1 items=0 ppid=2466 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:15:34.125000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 9 19:15:34.127000 audit[2533]: NETFILTER_CFG table=filter:58 family=10 entries=1 op=nft_register_chain pid=2533 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:15:34.127000 audit[2533]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc6cb86c0 a2=0 a3=1 items=0 ppid=2466 pid=2533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:15:34.127000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 9 19:15:34.214142 kubelet[2466]: I0209 19:15:34.214089 2466 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-18-155" Feb 9 19:15:34.214829 kubelet[2466]: E0209 19:15:34.214803 2466 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.18.155:6443/api/v1/nodes\": dial tcp 172.31.18.155:6443: connect: connection refused" node="ip-172-31-18-155" Feb 9 19:15:34.222995 kubelet[2466]: I0209 19:15:34.222969 2466 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:15:34.224717 kubelet[2466]: I0209 19:15:34.224684 2466 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:15:34.226840 kubelet[2466]: I0209 19:15:34.226795 2466 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:15:34.231622 kubelet[2466]: I0209 19:15:34.231573 2466 status_manager.go:698] "Failed to get status for pod" 
podUID=bcef2d3a710d12e2de6bee96cf380678 pod="kube-system/kube-apiserver-ip-172-31-18-155" err="Get \"https://172.31.18.155:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ip-172-31-18-155\": dial tcp 172.31.18.155:6443: connect: connection refused" Feb 9 19:15:34.249554 kubelet[2466]: I0209 19:15:34.249521 2466 status_manager.go:698] "Failed to get status for pod" podUID=16513a0c7cc084f4aba820843e69ee6a pod="kube-system/kube-controller-manager-ip-172-31-18-155" err="Get \"https://172.31.18.155:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ip-172-31-18-155\": dial tcp 172.31.18.155:6443: connect: connection refused" Feb 9 19:15:34.251627 kubelet[2466]: I0209 19:15:34.251590 2466 status_manager.go:698] "Failed to get status for pod" podUID=307e543254f557e023dcc800b26e5112 pod="kube-system/kube-scheduler-ip-172-31-18-155" err="Get \"https://172.31.18.155:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ip-172-31-18-155\": dial tcp 172.31.18.155:6443: connect: connection refused" Feb 9 19:15:34.301807 kubelet[2466]: I0209 19:15:34.301732 2466 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bcef2d3a710d12e2de6bee96cf380678-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-155\" (UID: \"bcef2d3a710d12e2de6bee96cf380678\") " pod="kube-system/kube-apiserver-ip-172-31-18-155" Feb 9 19:15:34.301973 kubelet[2466]: I0209 19:15:34.301833 2466 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/16513a0c7cc084f4aba820843e69ee6a-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-155\" (UID: \"16513a0c7cc084f4aba820843e69ee6a\") " pod="kube-system/kube-controller-manager-ip-172-31-18-155" Feb 9 19:15:34.301973 kubelet[2466]: I0209 19:15:34.301882 2466 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/16513a0c7cc084f4aba820843e69ee6a-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-155\" (UID: \"16513a0c7cc084f4aba820843e69ee6a\") " pod="kube-system/kube-controller-manager-ip-172-31-18-155" Feb 9 19:15:34.301973 kubelet[2466]: I0209 19:15:34.301932 2466 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/16513a0c7cc084f4aba820843e69ee6a-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-155\" (UID: \"16513a0c7cc084f4aba820843e69ee6a\") " pod="kube-system/kube-controller-manager-ip-172-31-18-155" Feb 9 19:15:34.302170 kubelet[2466]: I0209 19:15:34.301977 2466 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/307e543254f557e023dcc800b26e5112-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-155\" (UID: \"307e543254f557e023dcc800b26e5112\") " pod="kube-system/kube-scheduler-ip-172-31-18-155" Feb 9 19:15:34.302170 kubelet[2466]: I0209 19:15:34.302020 2466 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bcef2d3a710d12e2de6bee96cf380678-ca-certs\") pod \"kube-apiserver-ip-172-31-18-155\" (UID: \"bcef2d3a710d12e2de6bee96cf380678\") " pod="kube-system/kube-apiserver-ip-172-31-18-155" Feb 9 19:15:34.302170 kubelet[2466]: I0209 19:15:34.302064 2466 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bcef2d3a710d12e2de6bee96cf380678-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-155\" (UID: \"bcef2d3a710d12e2de6bee96cf380678\") " pod="kube-system/kube-apiserver-ip-172-31-18-155" Feb 9 19:15:34.302170 kubelet[2466]: I0209 19:15:34.302111 2466 
reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/16513a0c7cc084f4aba820843e69ee6a-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-155\" (UID: \"16513a0c7cc084f4aba820843e69ee6a\") " pod="kube-system/kube-controller-manager-ip-172-31-18-155" Feb 9 19:15:34.302170 kubelet[2466]: I0209 19:15:34.302155 2466 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/16513a0c7cc084f4aba820843e69ee6a-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-155\" (UID: \"16513a0c7cc084f4aba820843e69ee6a\") " pod="kube-system/kube-controller-manager-ip-172-31-18-155" Feb 9 19:15:34.503402 kubelet[2466]: E0209 19:15:34.503332 2466 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://172.31.18.155:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-155?timeout=10s": dial tcp 172.31.18.155:6443: connect: connection refused Feb 9 19:15:34.537699 env[1741]: time="2024-02-09T19:15:34.537623764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-155,Uid:bcef2d3a710d12e2de6bee96cf380678,Namespace:kube-system,Attempt:0,}" Feb 9 19:15:34.538636 env[1741]: time="2024-02-09T19:15:34.538551783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-155,Uid:307e543254f557e023dcc800b26e5112,Namespace:kube-system,Attempt:0,}" Feb 9 19:15:34.541289 env[1741]: time="2024-02-09T19:15:34.541211248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-155,Uid:16513a0c7cc084f4aba820843e69ee6a,Namespace:kube-system,Attempt:0,}" Feb 9 19:15:34.617550 kubelet[2466]: I0209 19:15:34.617504 2466 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-18-155" Feb 9 19:15:34.617989 kubelet[2466]: E0209 
19:15:34.617959 2466 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.18.155:6443/api/v1/nodes\": dial tcp 172.31.18.155:6443: connect: connection refused" node="ip-172-31-18-155" Feb 9 19:15:34.765231 kubelet[2466]: E0209 19:15:34.764990 2466 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-18-155.17b247c831eddf5b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-18-155", UID:"ip-172-31-18-155", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-18-155"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 15, 33, 883543387, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 15, 33, 883543387, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://172.31.18.155:6443/api/v1/namespaces/default/events": dial tcp 172.31.18.155:6443: connect: connection refused'(may retry after sleeping) Feb 9 19:15:34.832294 kubelet[2466]: W0209 19:15:34.832201 2466 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.18.155:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.155:6443: connect: connection refused Feb 9 
19:15:34.832294 kubelet[2466]: E0209 19:15:34.832294 2466 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.18.155:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.155:6443: connect: connection refused Feb 9 19:15:34.962394 kubelet[2466]: W0209 19:15:34.962302 2466 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.18.155:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.18.155:6443: connect: connection refused Feb 9 19:15:34.962394 kubelet[2466]: E0209 19:15:34.962396 2466 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.18.155:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.18.155:6443: connect: connection refused Feb 9 19:15:34.994133 kubelet[2466]: W0209 19:15:34.994050 2466 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.18.155:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-155&limit=500&resourceVersion=0": dial tcp 172.31.18.155:6443: connect: connection refused Feb 9 19:15:34.994294 kubelet[2466]: E0209 19:15:34.994147 2466 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.18.155:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-155&limit=500&resourceVersion=0": dial tcp 172.31.18.155:6443: connect: connection refused Feb 9 19:15:35.072125 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount851411758.mount: Deactivated successfully. 
Feb 9 19:15:35.081053 env[1741]: time="2024-02-09T19:15:35.081001387Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:35.089426 env[1741]: time="2024-02-09T19:15:35.089374704Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:35.091539 env[1741]: time="2024-02-09T19:15:35.091492837Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:35.094041 env[1741]: time="2024-02-09T19:15:35.093996318Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:35.096144 env[1741]: time="2024-02-09T19:15:35.096103223Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:35.097618 env[1741]: time="2024-02-09T19:15:35.097580649Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:35.099426 env[1741]: time="2024-02-09T19:15:35.099362607Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:35.103667 env[1741]: time="2024-02-09T19:15:35.103597011Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 
19:15:35.105075 env[1741]: time="2024-02-09T19:15:35.105016112Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:35.108384 env[1741]: time="2024-02-09T19:15:35.106640460Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:35.137964 env[1741]: time="2024-02-09T19:15:35.137895441Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:35.143112 env[1741]: time="2024-02-09T19:15:35.143033391Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:35.146676 env[1741]: time="2024-02-09T19:15:35.146262550Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:15:35.146676 env[1741]: time="2024-02-09T19:15:35.146341805Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:15:35.146676 env[1741]: time="2024-02-09T19:15:35.146368379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:15:35.147611 env[1741]: time="2024-02-09T19:15:35.147522936Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1db3f3c50cc61443f1d285fac221558846ea6f5ddf40dd6ac2b400273a26a4ed pid=2541 runtime=io.containerd.runc.v2 Feb 9 19:15:35.205067 env[1741]: time="2024-02-09T19:15:35.193920569Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:15:35.205067 env[1741]: time="2024-02-09T19:15:35.194001205Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:15:35.205067 env[1741]: time="2024-02-09T19:15:35.194030457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:15:35.205067 env[1741]: time="2024-02-09T19:15:35.194307599Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7cc63bb1241fefbe124f8d05f979ffe0823f660be400f5a33f4e5fa33e4e8c41 pid=2565 runtime=io.containerd.runc.v2 Feb 9 19:15:35.250595 env[1741]: time="2024-02-09T19:15:35.249305663Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:15:35.250595 env[1741]: time="2024-02-09T19:15:35.249377137Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:15:35.250595 env[1741]: time="2024-02-09T19:15:35.249403375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:15:35.251309 env[1741]: time="2024-02-09T19:15:35.251194748Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b9ae943a875e4e040fc916ab27714a90f384e819b7e4b451eafe981132ac1ff0 pid=2599 runtime=io.containerd.runc.v2 Feb 9 19:15:35.304296 kubelet[2466]: E0209 19:15:35.303984 2466 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get "https://172.31.18.155:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-155?timeout=10s": dial tcp 172.31.18.155:6443: connect: connection refused Feb 9 19:15:35.332223 env[1741]: time="2024-02-09T19:15:35.332006739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-155,Uid:16513a0c7cc084f4aba820843e69ee6a,Namespace:kube-system,Attempt:0,} returns sandbox id \"1db3f3c50cc61443f1d285fac221558846ea6f5ddf40dd6ac2b400273a26a4ed\"" Feb 9 19:15:35.342209 env[1741]: time="2024-02-09T19:15:35.341663499Z" level=info msg="CreateContainer within sandbox \"1db3f3c50cc61443f1d285fac221558846ea6f5ddf40dd6ac2b400273a26a4ed\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 9 19:15:35.369095 env[1741]: time="2024-02-09T19:15:35.369033483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-155,Uid:bcef2d3a710d12e2de6bee96cf380678,Namespace:kube-system,Attempt:0,} returns sandbox id \"7cc63bb1241fefbe124f8d05f979ffe0823f660be400f5a33f4e5fa33e4e8c41\"" Feb 9 19:15:35.382071 env[1741]: time="2024-02-09T19:15:35.381991200Z" level=info msg="CreateContainer within sandbox \"7cc63bb1241fefbe124f8d05f979ffe0823f660be400f5a33f4e5fa33e4e8c41\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 9 19:15:35.388371 env[1741]: time="2024-02-09T19:15:35.388301239Z" level=info msg="CreateContainer within sandbox 
\"1db3f3c50cc61443f1d285fac221558846ea6f5ddf40dd6ac2b400273a26a4ed\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6b29eecd84434a066102df6c56af9fa7fdeae6f07116d154f8c249299cedc1b8\"" Feb 9 19:15:35.389388 env[1741]: time="2024-02-09T19:15:35.389342581Z" level=info msg="StartContainer for \"6b29eecd84434a066102df6c56af9fa7fdeae6f07116d154f8c249299cedc1b8\"" Feb 9 19:15:35.404427 env[1741]: time="2024-02-09T19:15:35.404371009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-155,Uid:307e543254f557e023dcc800b26e5112,Namespace:kube-system,Attempt:0,} returns sandbox id \"b9ae943a875e4e040fc916ab27714a90f384e819b7e4b451eafe981132ac1ff0\"" Feb 9 19:15:35.408826 env[1741]: time="2024-02-09T19:15:35.408725065Z" level=info msg="CreateContainer within sandbox \"7cc63bb1241fefbe124f8d05f979ffe0823f660be400f5a33f4e5fa33e4e8c41\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9152f64bd580d4e4520bbc286aebf7b8bed65759441905a3ca8d133d131009d5\"" Feb 9 19:15:35.409511 env[1741]: time="2024-02-09T19:15:35.409447704Z" level=info msg="CreateContainer within sandbox \"b9ae943a875e4e040fc916ab27714a90f384e819b7e4b451eafe981132ac1ff0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 9 19:15:35.410125 env[1741]: time="2024-02-09T19:15:35.410072272Z" level=info msg="StartContainer for \"9152f64bd580d4e4520bbc286aebf7b8bed65759441905a3ca8d133d131009d5\"" Feb 9 19:15:35.420899 kubelet[2466]: I0209 19:15:35.420842 2466 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-18-155" Feb 9 19:15:35.421352 kubelet[2466]: E0209 19:15:35.421315 2466 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.18.155:6443/api/v1/nodes\": dial tcp 172.31.18.155:6443: connect: connection refused" node="ip-172-31-18-155" Feb 9 19:15:35.441238 env[1741]: time="2024-02-09T19:15:35.441157286Z" level=info 
msg="CreateContainer within sandbox \"b9ae943a875e4e040fc916ab27714a90f384e819b7e4b451eafe981132ac1ff0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6e3898d8dde7d718f1d338c7310402e68581c3a0c2fa4fc40d986b350a46d58e\"" Feb 9 19:15:35.442237 env[1741]: time="2024-02-09T19:15:35.442189597Z" level=info msg="StartContainer for \"6e3898d8dde7d718f1d338c7310402e68581c3a0c2fa4fc40d986b350a46d58e\"" Feb 9 19:15:35.568541 kubelet[2466]: W0209 19:15:35.568385 2466 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.18.155:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.155:6443: connect: connection refused Feb 9 19:15:35.568541 kubelet[2466]: E0209 19:15:35.568491 2466 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.18.155:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.155:6443: connect: connection refused Feb 9 19:15:35.652180 env[1741]: time="2024-02-09T19:15:35.652038022Z" level=info msg="StartContainer for \"6b29eecd84434a066102df6c56af9fa7fdeae6f07116d154f8c249299cedc1b8\" returns successfully" Feb 9 19:15:35.663485 env[1741]: time="2024-02-09T19:15:35.663423135Z" level=info msg="StartContainer for \"9152f64bd580d4e4520bbc286aebf7b8bed65759441905a3ca8d133d131009d5\" returns successfully" Feb 9 19:15:35.683349 env[1741]: time="2024-02-09T19:15:35.683285137Z" level=info msg="StartContainer for \"6e3898d8dde7d718f1d338c7310402e68581c3a0c2fa4fc40d986b350a46d58e\" returns successfully" Feb 9 19:15:37.023579 kubelet[2466]: I0209 19:15:37.023524 2466 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-18-155" Feb 9 19:15:41.122596 kubelet[2466]: E0209 19:15:41.122559 2466 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes 
\"ip-172-31-18-155\" not found" node="ip-172-31-18-155" Feb 9 19:15:41.183998 kubelet[2466]: I0209 19:15:41.183944 2466 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-18-155" Feb 9 19:15:41.473865 update_engine[1734]: I0209 19:15:41.473812 1734 update_attempter.cc:509] Updating boot flags... Feb 9 19:15:41.885319 kubelet[2466]: I0209 19:15:41.885036 2466 apiserver.go:52] "Watching apiserver" Feb 9 19:15:41.901970 kubelet[2466]: I0209 19:15:41.901917 2466 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 19:15:41.954007 kubelet[2466]: I0209 19:15:41.953957 2466 reconciler.go:41] "Reconciler: start to sync state" Feb 9 19:15:43.807784 systemd[1]: Reloading. Feb 9 19:15:43.951252 /usr/lib/systemd/system-generators/torcx-generator[2975]: time="2024-02-09T19:15:43Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:15:43.951317 /usr/lib/systemd/system-generators/torcx-generator[2975]: time="2024-02-09T19:15:43Z" level=info msg="torcx already run" Feb 9 19:15:44.098060 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:15:44.098339 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:15:44.146611 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Feb 9 19:15:44.375692 kubelet[2466]: I0209 19:15:44.375395 2466 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 19:15:44.376016 systemd[1]: Stopping kubelet.service... Feb 9 19:15:44.399251 systemd[1]: kubelet.service: Deactivated successfully. Feb 9 19:15:44.399982 systemd[1]: Stopped kubelet.service. Feb 9 19:15:44.411064 kernel: kauditd_printk_skb: 104 callbacks suppressed Feb 9 19:15:44.411219 kernel: audit: type=1131 audit(1707506144.398:222): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:15:44.398000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:15:44.404241 systemd[1]: Started kubelet.service. Feb 9 19:15:44.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:15:44.432269 kernel: audit: type=1130 audit(1707506144.401:223): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:15:44.561011 kubelet[3035]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 19:15:44.561594 kubelet[3035]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 9 19:15:44.561991 kubelet[3035]: I0209 19:15:44.561928 3035 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 9 19:15:44.568301 kubelet[3035]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 9 19:15:44.568501 kubelet[3035]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 9 19:15:44.578145 kubelet[3035]: I0209 19:15:44.578100 3035 server.go:412] "Kubelet version" kubeletVersion="v1.26.5"
Feb 9 19:15:44.578398 kubelet[3035]: I0209 19:15:44.578376 3035 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 9 19:15:44.578958 kubelet[3035]: I0209 19:15:44.578926 3035 server.go:836] "Client rotation is on, will bootstrap in background"
Feb 9 19:15:44.584334 kubelet[3035]: I0209 19:15:44.584258 3035 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 9 19:15:44.586518 kubelet[3035]: I0209 19:15:44.586465 3035 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 9 19:15:44.590442 kubelet[3035]: W0209 19:15:44.590409 3035 machine.go:65] Cannot read vendor id correctly, set empty.
Feb 9 19:15:44.592123 kubelet[3035]: I0209 19:15:44.592087 3035 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 9 19:15:44.593225 kubelet[3035]: I0209 19:15:44.593196 3035 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 9 19:15:44.593464 kubelet[3035]: I0209 19:15:44.593440 3035 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
Feb 9 19:15:44.593704 kubelet[3035]: I0209 19:15:44.593681 3035 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Feb 9 19:15:44.593882 kubelet[3035]: I0209 19:15:44.593863 3035 container_manager_linux.go:308] "Creating device plugin manager"
Feb 9 19:15:44.594046 kubelet[3035]: I0209 19:15:44.594023 3035 state_mem.go:36] "Initialized new in-memory state store"
Feb 9 19:15:44.600673 kubelet[3035]: I0209 19:15:44.600639 3035 kubelet.go:398] "Attempting to sync node with API server"
Feb 9 19:15:44.600913 kubelet[3035]: I0209 19:15:44.600888 3035 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 9 19:15:44.601066 kubelet[3035]: I0209 19:15:44.601046 3035 kubelet.go:297] "Adding apiserver pod source"
Feb 9 19:15:44.601182 kubelet[3035]: I0209 19:15:44.601161 3035 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 9 19:15:44.603162 kubelet[3035]: I0209 19:15:44.603114 3035 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 9 19:15:44.604126 kubelet[3035]: I0209 19:15:44.604083 3035 server.go:1186] "Started kubelet"
Feb 9 19:15:44.639000 audit[3035]: AVC avc: denied { mac_admin } for pid=3035 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 19:15:44.648191 kubelet[3035]: I0209 19:15:44.648156 3035 kubelet.go:1341] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument"
Feb 9 19:15:44.648374 kubelet[3035]: I0209 19:15:44.648353 3035 kubelet.go:1345] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument"
Feb 9 19:15:44.648521 kubelet[3035]: I0209 19:15:44.648500 3035 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 9 19:15:44.653848 kernel: audit: type=1400 audit(1707506144.639:224): avc: denied { mac_admin } for pid=3035 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 19:15:44.653965 kernel: audit: type=1401 audit(1707506144.639:224): op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Feb 9 19:15:44.639000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Feb 9 19:15:44.655978 kubelet[3035]: I0209 19:15:44.655945 3035 server.go:161] "Starting to listen" address="0.0.0.0" port=10250
Feb 9 19:15:44.669128 kernel: audit: type=1300 audit(1707506144.639:224): arch=c00000b7 syscall=5 success=no exit=-22 a0=40009a7140 a1=4000e60618 a2=40009a7110 a3=25 items=0 ppid=1 pid=3035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:15:44.639000 audit[3035]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40009a7140 a1=4000e60618 a2=40009a7110 a3=25 items=0 ppid=1 pid=3035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:15:44.671936 kubelet[3035]: I0209 19:15:44.671879 3035 server.go:451] "Adding debug handlers to kubelet server"
Feb 9 19:15:44.675363 kubelet[3035]: E0209 19:15:44.675279 3035 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 9 19:15:44.675536 kubelet[3035]: E0209 19:15:44.675375 3035 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 9 19:15:44.678061 kubelet[3035]: I0209 19:15:44.678013 3035 volume_manager.go:293] "Starting Kubelet Volume Manager"
Feb 9 19:15:44.678402 kubelet[3035]: I0209 19:15:44.678357 3035 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 9 19:15:44.639000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
Feb 9 19:15:44.697678 kernel: audit: type=1327 audit(1707506144.639:224): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
Feb 9 19:15:44.647000 audit[3035]: AVC avc: denied { mac_admin } for pid=3035 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 19:15:44.715492 kernel: audit: type=1400 audit(1707506144.647:225): avc: denied { mac_admin } for pid=3035 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 19:15:44.728828 kernel: audit: type=1401 audit(1707506144.647:225): op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Feb 9 19:15:44.728920 kernel: audit: type=1300 audit(1707506144.647:225): arch=c00000b7 syscall=5 success=no exit=-22 a0=40009daa80 a1=4000e60630 a2=40009a71d0 a3=25 items=0 ppid=1 pid=3035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:15:44.647000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Feb 9 19:15:44.647000 audit[3035]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40009daa80 a1=4000e60630 a2=40009a71d0 a3=25 items=0 ppid=1 pid=3035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:15:44.647000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
Feb 9 19:15:44.747912 kernel: audit: type=1327 audit(1707506144.647:225): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
Feb 9 19:15:44.794479 kubelet[3035]: I0209 19:15:44.794429 3035 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-18-155"
Feb 9 19:15:44.823816 kubelet[3035]: I0209 19:15:44.821378 3035 kubelet_node_status.go:108] "Node was previously registered" node="ip-172-31-18-155"
Feb 9 19:15:44.823816 kubelet[3035]: I0209 19:15:44.821509 3035 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-18-155"
Feb 9 19:15:44.853977 kubelet[3035]: I0209 19:15:44.853946 3035 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Feb 9 19:15:45.062079 kubelet[3035]: I0209 19:15:45.062024 3035 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Feb 9 19:15:45.062079 kubelet[3035]: I0209 19:15:45.062069 3035 status_manager.go:176] "Starting to sync pod status with apiserver"
Feb 9 19:15:45.062328 kubelet[3035]: I0209 19:15:45.062108 3035 kubelet.go:2113] "Starting kubelet main sync loop"
Feb 9 19:15:45.062328 kubelet[3035]: E0209 19:15:45.062206 3035 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 9 19:15:45.085780 kubelet[3035]: I0209 19:15:45.084787 3035 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 9 19:15:45.085780 kubelet[3035]: I0209 19:15:45.084828 3035 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 9 19:15:45.085780 kubelet[3035]: I0209 19:15:45.084885 3035 state_mem.go:36] "Initialized new in-memory state store"
Feb 9 19:15:45.085780 kubelet[3035]: I0209 19:15:45.085179 3035 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 9 19:15:45.085780 kubelet[3035]: I0209 19:15:45.085228 3035 state_mem.go:96] "Updated CPUSet assignments" assignments=map[]
Feb 9 19:15:45.085780 kubelet[3035]: I0209 19:15:45.085244 3035 policy_none.go:49] "None policy: Start"
Feb 9 19:15:45.092357 kubelet[3035]: I0209 19:15:45.092287 3035 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb 9 19:15:45.092357 kubelet[3035]: I0209 19:15:45.092356 3035 state_mem.go:35] "Initializing new in-memory state store"
Feb 9 19:15:45.092775 kubelet[3035]: I0209 19:15:45.092727 3035 state_mem.go:75] "Updated machine memory state"
Feb 9 19:15:45.114983 kubelet[3035]: I0209 19:15:45.114927 3035 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 9 19:15:45.114000 audit[3035]: AVC avc: denied { mac_admin } for pid=3035 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 19:15:45.114000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Feb 9 19:15:45.114000 audit[3035]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4001707740 a1=40013e6e70 a2=4001707710 a3=25 items=0 ppid=1 pid=3035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:15:45.114000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
Feb 9 19:15:45.118264 kubelet[3035]: I0209 19:15:45.118227 3035 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument"
Feb 9 19:15:45.125590 kubelet[3035]: I0209 19:15:45.124962 3035 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 9 19:15:45.162603 kubelet[3035]: I0209 19:15:45.162558 3035 topology_manager.go:210] "Topology Admit Handler"
Feb 9 19:15:45.162943 kubelet[3035]: I0209 19:15:45.162915 3035 topology_manager.go:210] "Topology Admit Handler"
Feb 9 19:15:45.163109 kubelet[3035]: I0209 19:15:45.163088 3035 topology_manager.go:210] "Topology Admit Handler"
Feb 9 19:15:45.180132 kubelet[3035]: E0209 19:15:45.175431 3035 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-18-155\" already exists" pod="kube-system/kube-scheduler-ip-172-31-18-155"
Feb 9 19:15:45.204799 kubelet[3035]: I0209 19:15:45.204736 3035 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/16513a0c7cc084f4aba820843e69ee6a-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-155\" (UID: \"16513a0c7cc084f4aba820843e69ee6a\") " pod="kube-system/kube-controller-manager-ip-172-31-18-155"
Feb 9 19:15:45.205133 kubelet[3035]: I0209 19:15:45.205110 3035 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/16513a0c7cc084f4aba820843e69ee6a-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-155\" (UID: \"16513a0c7cc084f4aba820843e69ee6a\") " pod="kube-system/kube-controller-manager-ip-172-31-18-155"
Feb 9 19:15:45.205296 kubelet[3035]: I0209 19:15:45.205276 3035 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bcef2d3a710d12e2de6bee96cf380678-ca-certs\") pod \"kube-apiserver-ip-172-31-18-155\" (UID: \"bcef2d3a710d12e2de6bee96cf380678\") " pod="kube-system/kube-apiserver-ip-172-31-18-155"
Feb 9 19:15:45.205471 kubelet[3035]: I0209 19:15:45.205437 3035 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bcef2d3a710d12e2de6bee96cf380678-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-155\" (UID: \"bcef2d3a710d12e2de6bee96cf380678\") " pod="kube-system/kube-apiserver-ip-172-31-18-155"
Feb 9 19:15:45.205618 kubelet[3035]: I0209 19:15:45.205597 3035 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bcef2d3a710d12e2de6bee96cf380678-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-155\" (UID: \"bcef2d3a710d12e2de6bee96cf380678\") " pod="kube-system/kube-apiserver-ip-172-31-18-155"
Feb 9 19:15:45.205857 kubelet[3035]: I0209 19:15:45.205836 3035 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/16513a0c7cc084f4aba820843e69ee6a-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-155\" (UID: \"16513a0c7cc084f4aba820843e69ee6a\") " pod="kube-system/kube-controller-manager-ip-172-31-18-155"
Feb 9 19:15:45.206020 kubelet[3035]: I0209 19:15:45.206000 3035 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/16513a0c7cc084f4aba820843e69ee6a-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-155\" (UID: \"16513a0c7cc084f4aba820843e69ee6a\") " pod="kube-system/kube-controller-manager-ip-172-31-18-155"
Feb 9 19:15:45.206372 kubelet[3035]: I0209 19:15:45.206265 3035 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/16513a0c7cc084f4aba820843e69ee6a-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-155\" (UID: \"16513a0c7cc084f4aba820843e69ee6a\") " pod="kube-system/kube-controller-manager-ip-172-31-18-155"
Feb 9 19:15:45.206544 kubelet[3035]: I0209 19:15:45.206514 3035 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/307e543254f557e023dcc800b26e5112-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-155\" (UID: \"307e543254f557e023dcc800b26e5112\") " pod="kube-system/kube-scheduler-ip-172-31-18-155"
Feb 9 19:15:45.611359 kubelet[3035]: I0209 19:15:45.611292 3035 apiserver.go:52] "Watching apiserver"
Feb 9 19:15:45.679269 kubelet[3035]: I0209 19:15:45.679200 3035 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 9 19:15:45.711615 kubelet[3035]: I0209 19:15:45.711545 3035 reconciler.go:41] "Reconciler: start to sync state"
Feb 9 19:15:46.140324 kubelet[3035]: E0209 19:15:46.140286 3035 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-18-155\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-18-155"
Feb 9 19:15:46.418534 kubelet[3035]: I0209 19:15:46.418384 3035 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-18-155" podStartSLOduration=1.418289401 pod.CreationTimestamp="2024-02-09 19:15:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:15:46.41781233 +0000 UTC m=+2.001586516" watchObservedRunningTime="2024-02-09 19:15:46.418289401 +0000 UTC m=+2.002063575"
Feb 9 19:15:47.259407 kubelet[3035]: I0209 19:15:47.259360 3035 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-18-155" podStartSLOduration=2.259306696 pod.CreationTimestamp="2024-02-09 19:15:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:15:46.848705098 +0000 UTC m=+2.432479380" watchObservedRunningTime="2024-02-09 19:15:47.259306696 +0000 UTC m=+2.843080870"
Feb 9 19:15:51.553483 sudo[2043]: pam_unix(sudo:session): session closed for user root
Feb 9 19:15:51.553000 audit[2043]: USER_END pid=2043 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 9 19:15:51.556131 kernel: kauditd_printk_skb: 4 callbacks suppressed
Feb 9 19:15:51.556240 kernel: audit: type=1106 audit(1707506151.553:227): pid=2043 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 9 19:15:51.553000 audit[2043]: CRED_DISP pid=2043 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 9 19:15:51.573310 kernel: audit: type=1104 audit(1707506151.553:228): pid=2043 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 9 19:15:51.579125 sshd[2039]: pam_unix(sshd:session): session closed for user core
Feb 9 19:15:51.580000 audit[2039]: USER_END pid=2039 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:15:51.580000 audit[2039]: CRED_DISP pid=2039 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:15:51.604782 kernel: audit: type=1106 audit(1707506151.580:229): pid=2039 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:15:51.604947 kernel: audit: type=1104 audit(1707506151.580:230): pid=2039 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:15:51.603577 systemd[1]: sshd@6-172.31.18.155:22-147.75.109.163:49408.service: Deactivated successfully.
Feb 9 19:15:51.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.18.155:22-147.75.109.163:49408 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:15:51.614465 kernel: audit: type=1131 audit(1707506151.603:231): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.18.155:22-147.75.109.163:49408 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:15:51.615163 systemd[1]: session-7.scope: Deactivated successfully.
Feb 9 19:15:51.615299 systemd-logind[1733]: Session 7 logged out. Waiting for processes to exit.
Feb 9 19:15:51.617647 systemd-logind[1733]: Removed session 7.
Feb 9 19:15:57.038734 kubelet[3035]: I0209 19:15:57.038702 3035 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Feb 9 19:15:57.040272 env[1741]: time="2024-02-09T19:15:57.040218263Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 9 19:15:57.041421 kubelet[3035]: I0209 19:15:57.041390 3035 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Feb 9 19:15:57.659298 kubelet[3035]: I0209 19:15:57.659248 3035 topology_manager.go:210] "Topology Admit Handler"
Feb 9 19:15:57.693366 kubelet[3035]: W0209 19:15:57.693305 3035 reflector.go:424] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-18-155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-18-155' and this object
Feb 9 19:15:57.693617 kubelet[3035]: E0209 19:15:57.693591 3035 reflector.go:140] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-18-155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-18-155' and this object
Feb 9 19:15:57.694138 kubelet[3035]: W0209 19:15:57.694107 3035 reflector.go:424] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-18-155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-18-155' and this object
Feb 9 19:15:57.694344 kubelet[3035]: E0209 19:15:57.694319 3035 reflector.go:140] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-18-155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-18-155' and this object
Feb 9 19:15:57.789863 kubelet[3035]: I0209 19:15:57.789806 3035 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c44f2177-eb7a-4599-85b8-03666d1e7e3b-kube-proxy\") pod \"kube-proxy-dgvwd\" (UID: \"c44f2177-eb7a-4599-85b8-03666d1e7e3b\") " pod="kube-system/kube-proxy-dgvwd"
Feb 9 19:15:57.790023 kubelet[3035]: I0209 19:15:57.789902 3035 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c44f2177-eb7a-4599-85b8-03666d1e7e3b-xtables-lock\") pod \"kube-proxy-dgvwd\" (UID: \"c44f2177-eb7a-4599-85b8-03666d1e7e3b\") " pod="kube-system/kube-proxy-dgvwd"
Feb 9 19:15:57.790023 kubelet[3035]: I0209 19:15:57.789980 3035 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c44f2177-eb7a-4599-85b8-03666d1e7e3b-lib-modules\") pod \"kube-proxy-dgvwd\" (UID: \"c44f2177-eb7a-4599-85b8-03666d1e7e3b\") " pod="kube-system/kube-proxy-dgvwd"
Feb 9 19:15:57.790190 kubelet[3035]: I0209 19:15:57.790051 3035 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrm28\" (UniqueName: \"kubernetes.io/projected/c44f2177-eb7a-4599-85b8-03666d1e7e3b-kube-api-access-rrm28\") pod \"kube-proxy-dgvwd\" (UID: \"c44f2177-eb7a-4599-85b8-03666d1e7e3b\") " pod="kube-system/kube-proxy-dgvwd"
Feb 9 19:15:58.008097 kubelet[3035]: I0209 19:15:58.008027 3035 topology_manager.go:210] "Topology Admit Handler"
Feb 9 19:15:58.091986 kubelet[3035]: I0209 19:15:58.091871 3035 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/68a64c29-c6be-45c7-afbf-291b972cc3d0-var-lib-calico\") pod \"tigera-operator-cfc98749c-j29jj\" (UID: \"68a64c29-c6be-45c7-afbf-291b972cc3d0\") " pod="tigera-operator/tigera-operator-cfc98749c-j29jj"
Feb 9 19:15:58.092589 kubelet[3035]: I0209 19:15:58.092026 3035 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ztwb\" (UniqueName: \"kubernetes.io/projected/68a64c29-c6be-45c7-afbf-291b972cc3d0-kube-api-access-5ztwb\") pod \"tigera-operator-cfc98749c-j29jj\" (UID: \"68a64c29-c6be-45c7-afbf-291b972cc3d0\") " pod="tigera-operator/tigera-operator-cfc98749c-j29jj"
Feb 9 19:15:58.316945 env[1741]: time="2024-02-09T19:15:58.316710000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-cfc98749c-j29jj,Uid:68a64c29-c6be-45c7-afbf-291b972cc3d0,Namespace:tigera-operator,Attempt:0,}"
Feb 9 19:15:58.349824 env[1741]: time="2024-02-09T19:15:58.349664823Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:15:58.350100 env[1741]: time="2024-02-09T19:15:58.350053214Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:15:58.350262 env[1741]: time="2024-02-09T19:15:58.350217199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:15:58.350986 env[1741]: time="2024-02-09T19:15:58.350899111Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/257d174a1e37e5374d37ce0a2b4d586ab6eceb80c26a14f7e1b79d5f170bbb13 pid=3142 runtime=io.containerd.runc.v2
Feb 9 19:15:58.457949 env[1741]: time="2024-02-09T19:15:58.457890820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-cfc98749c-j29jj,Uid:68a64c29-c6be-45c7-afbf-291b972cc3d0,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"257d174a1e37e5374d37ce0a2b4d586ab6eceb80c26a14f7e1b79d5f170bbb13\""
Feb 9 19:15:58.463223 env[1741]: time="2024-02-09T19:15:58.463171534Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.32.3\""
Feb 9 19:15:58.904125 kubelet[3035]: E0209 19:15:58.904067 3035 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 9 19:15:58.904324 kubelet[3035]: E0209 19:15:58.904146 3035 projected.go:198] Error preparing data for projected volume kube-api-access-rrm28 for pod kube-system/kube-proxy-dgvwd: failed to sync configmap cache: timed out waiting for the condition
Feb 9 19:15:58.904324 kubelet[3035]: E0209 19:15:58.904247 3035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c44f2177-eb7a-4599-85b8-03666d1e7e3b-kube-api-access-rrm28 podName:c44f2177-eb7a-4599-85b8-03666d1e7e3b nodeName:}" failed. No retries permitted until 2024-02-09 19:15:59.404214778 +0000 UTC m=+14.987988952 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rrm28" (UniqueName: "kubernetes.io/projected/c44f2177-eb7a-4599-85b8-03666d1e7e3b-kube-api-access-rrm28") pod "kube-proxy-dgvwd" (UID: "c44f2177-eb7a-4599-85b8-03666d1e7e3b") : failed to sync configmap cache: timed out waiting for the condition
Feb 9 19:15:59.765670 env[1741]: time="2024-02-09T19:15:59.765395686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dgvwd,Uid:c44f2177-eb7a-4599-85b8-03666d1e7e3b,Namespace:kube-system,Attempt:0,}"
Feb 9 19:15:59.824060 env[1741]: time="2024-02-09T19:15:59.823926245Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:15:59.824060 env[1741]: time="2024-02-09T19:15:59.824010109Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:15:59.824431 env[1741]: time="2024-02-09T19:15:59.824353079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:15:59.824866 env[1741]: time="2024-02-09T19:15:59.824789683Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/56d3a8c3eea9c800c9532a16ebc4c55bf11bff0d5190bf1cc70c71db58cd704c pid=3185 runtime=io.containerd.runc.v2
Feb 9 19:15:59.956634 env[1741]: time="2024-02-09T19:15:59.954548375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dgvwd,Uid:c44f2177-eb7a-4599-85b8-03666d1e7e3b,Namespace:kube-system,Attempt:0,} returns sandbox id \"56d3a8c3eea9c800c9532a16ebc4c55bf11bff0d5190bf1cc70c71db58cd704c\""
Feb 9 19:15:59.968208 env[1741]: time="2024-02-09T19:15:59.968140511Z" level=info msg="CreateContainer within sandbox \"56d3a8c3eea9c800c9532a16ebc4c55bf11bff0d5190bf1cc70c71db58cd704c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 9 19:16:00.002805 env[1741]: time="2024-02-09T19:16:00.002706390Z" level=info msg="CreateContainer within sandbox \"56d3a8c3eea9c800c9532a16ebc4c55bf11bff0d5190bf1cc70c71db58cd704c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2e50e380a849fbb488a146879d332c2a63a2cab0c78ac73efb8bff022ef2f0e1\""
Feb 9 19:16:00.004221 env[1741]: time="2024-02-09T19:16:00.004157441Z" level=info msg="StartContainer for \"2e50e380a849fbb488a146879d332c2a63a2cab0c78ac73efb8bff022ef2f0e1\""
Feb 9 19:16:00.155552 env[1741]: time="2024-02-09T19:16:00.154027722Z" level=info msg="StartContainer for \"2e50e380a849fbb488a146879d332c2a63a2cab0c78ac73efb8bff022ef2f0e1\" returns successfully"
Feb 9 19:16:00.347000 audit[3275]: NETFILTER_CFG table=mangle:59 family=2 entries=1 op=nft_register_chain pid=3275 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 19:16:00.362644 kernel: audit: type=1325 audit(1707506160.347:232): table=mangle:59 family=2 entries=1 op=nft_register_chain pid=3275 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 19:16:00.362725 kernel: audit: type=1300 audit(1707506160.347:232): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc2143b60 a2=0 a3=ffff8c7566c0 items=0 ppid=3237 pid=3275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:16:00.347000 audit[3275]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc2143b60 a2=0 a3=ffff8c7566c0 items=0 ppid=3237 pid=3275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:16:00.347000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65
Feb 9 19:16:00.372174 kernel: audit: type=1327 audit(1707506160.347:232): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65
Feb 9 19:16:00.372000 audit[3276]: NETFILTER_CFG table=mangle:60 family=10 entries=1 op=nft_register_chain pid=3276 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Feb 9 19:16:00.379482 kernel: audit: type=1325 audit(1707506160.372:233): table=mangle:60 family=10 entries=1 op=nft_register_chain pid=3276 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Feb 9 19:16:00.379618 kernel: audit: type=1300 audit(1707506160.372:233): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffb379fe0 a2=0 a3=ffffb9e976c0 items=0 ppid=3237 pid=3276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:16:00.372000 audit[3276]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffb379fe0 a2=0 a3=ffffb9e976c0 items=0 ppid=3237 pid=3276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:16:00.372000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65
Feb 9 19:16:00.392000 audit[3277]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=3277 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 19:16:00.408901 kernel: audit: type=1327 audit(1707506160.372:233): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65
Feb 9 19:16:00.409122 kernel: audit: type=1325 audit(1707506160.392:234): table=nat:61 family=2 entries=1 op=nft_register_chain pid=3277 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 19:16:00.409198 kernel: audit: type=1300 audit(1707506160.392:234): arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffed4e8bf0 a2=0 a3=ffffb1e2e6c0 items=0 ppid=3237 pid=3277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:16:00.392000 audit[3277]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffed4e8bf0 a2=0 a3=ffffb1e2e6c0 items=0 ppid=3237 pid=3277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:16:00.392000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174
Feb 9 19:16:00.426198 kernel: audit: type=1327 audit(1707506160.392:234): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174
Feb 9 19:16:00.426324 kernel: audit: type=1325 audit(1707506160.393:235): table=nat:62 family=10 entries=1 op=nft_register_chain pid=3278 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Feb 9 19:16:00.393000 audit[3278]: NETFILTER_CFG table=nat:62 family=10 entries=1 op=nft_register_chain pid=3278 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Feb 9 19:16:00.393000 audit[3278]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc7ab3550 a2=0 a3=ffff99fae6c0 items=0 ppid=3237 pid=3278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:16:00.393000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174
Feb 9 19:16:00.402000 audit[3279]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3279 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 19:16:00.402000 audit[3279]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc9dd6f00 a2=0 a3=ffffbe2c86c0 items=0 ppid=3237 pid=3279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:16:00.402000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572
Feb 9 19:16:00.402000 audit[3280]: NETFILTER_CFG table=filter:64 family=10 entries=1 op=nft_register_chain pid=3280 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Feb 9 19:16:00.402000 audit[3280]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe053d7e0 a2=0 a3=ffff818f56c0 items=0 ppid=3237 pid=3280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi"
subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:00.402000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 9 19:16:00.449000 audit[3281]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3281 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:16:00.449000 audit[3281]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffed91b970 a2=0 a3=ffffb72ed6c0 items=0 ppid=3237 pid=3281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:00.449000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 9 19:16:00.455000 audit[3283]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3283 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:16:00.455000 audit[3283]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffc3346e90 a2=0 a3=ffff927a56c0 items=0 ppid=3237 pid=3283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:00.455000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Feb 9 19:16:00.463000 audit[3286]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3286 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:16:00.463000 audit[3286]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffff2ec5fe0 a2=0 
a3=ffffbf6bb6c0 items=0 ppid=3237 pid=3286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:00.463000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Feb 9 19:16:00.466000 audit[3287]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3287 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:16:00.466000 audit[3287]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd2168640 a2=0 a3=ffffbd6136c0 items=0 ppid=3237 pid=3287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:00.466000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 9 19:16:00.472000 audit[3289]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3289 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:16:00.472000 audit[3289]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc10a0290 a2=0 a3=ffff99dfe6c0 items=0 ppid=3237 pid=3289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:00.472000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 9 19:16:00.475000 audit[3290]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3290 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:16:00.475000 audit[3290]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc1f5f4a0 a2=0 a3=ffffbe2dd6c0 items=0 ppid=3237 pid=3290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:00.475000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 9 19:16:00.481000 audit[3292]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3292 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:16:00.481000 audit[3292]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff7a43ed0 a2=0 a3=ffff80fc46c0 items=0 ppid=3237 pid=3292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:00.481000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 9 19:16:00.489000 audit[3295]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3295 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:16:00.489000 audit[3295]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 
a0=3 a1=fffff8e4f330 a2=0 a3=ffff930106c0 items=0 ppid=3237 pid=3295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:00.489000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Feb 9 19:16:00.492000 audit[3296]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_chain pid=3296 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:16:00.492000 audit[3296]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcedb7460 a2=0 a3=ffffb92076c0 items=0 ppid=3237 pid=3296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:00.492000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 9 19:16:00.497000 audit[3298]: NETFILTER_CFG table=filter:74 family=2 entries=1 op=nft_register_rule pid=3298 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:16:00.497000 audit[3298]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffdca27cf0 a2=0 a3=ffff881ae6c0 items=0 ppid=3237 pid=3298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:00.497000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 9 19:16:00.500000 audit[3299]: NETFILTER_CFG table=filter:75 family=2 entries=1 op=nft_register_chain pid=3299 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:16:00.500000 audit[3299]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffcfbd5c0 a2=0 a3=ffffae4f06c0 items=0 ppid=3237 pid=3299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:00.500000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 9 19:16:00.507000 audit[3301]: NETFILTER_CFG table=filter:76 family=2 entries=1 op=nft_register_rule pid=3301 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:16:00.507000 audit[3301]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffce26a6a0 a2=0 a3=ffff93b926c0 items=0 ppid=3237 pid=3301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:00.507000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 9 19:16:00.510938 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2250144565.mount: Deactivated successfully. 
Feb 9 19:16:00.520000 audit[3304]: NETFILTER_CFG table=filter:77 family=2 entries=1 op=nft_register_rule pid=3304 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:16:00.520000 audit[3304]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc9597710 a2=0 a3=ffff86fa36c0 items=0 ppid=3237 pid=3304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:00.520000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 9 19:16:00.528000 audit[3307]: NETFILTER_CFG table=filter:78 family=2 entries=1 op=nft_register_rule pid=3307 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:16:00.528000 audit[3307]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc58e1260 a2=0 a3=ffff8142d6c0 items=0 ppid=3237 pid=3307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:00.528000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 9 19:16:00.531000 audit[3308]: NETFILTER_CFG table=nat:79 family=2 entries=1 op=nft_register_chain pid=3308 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:16:00.531000 audit[3308]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc8f4cb70 a2=0 a3=ffff820fc6c0 items=0 ppid=3237 pid=3308 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:00.531000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 9 19:16:00.537000 audit[3310]: NETFILTER_CFG table=nat:80 family=2 entries=1 op=nft_register_rule pid=3310 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:16:00.537000 audit[3310]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffede4a5e0 a2=0 a3=ffffa6a936c0 items=0 ppid=3237 pid=3310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:00.537000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 19:16:00.544000 audit[3313]: NETFILTER_CFG table=nat:81 family=2 entries=1 op=nft_register_rule pid=3313 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:16:00.544000 audit[3313]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff2c0f1e0 a2=0 a3=ffffb68476c0 items=0 ppid=3237 pid=3313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:00.544000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 19:16:00.563000 audit[3317]: NETFILTER_CFG table=filter:82 family=2 entries=6 op=nft_register_rule pid=3317 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:16:00.563000 audit[3317]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4028 a0=3 a1=ffffd07d6a20 a2=0 a3=ffffbd6bb6c0 items=0 ppid=3237 pid=3317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:00.563000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:16:00.576000 audit[3317]: NETFILTER_CFG table=nat:83 family=2 entries=17 op=nft_register_chain pid=3317 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:16:00.576000 audit[3317]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=ffffd07d6a20 a2=0 a3=ffffbd6bb6c0 items=0 ppid=3237 pid=3317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:00.576000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:16:00.583000 audit[3322]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3322 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:16:00.583000 audit[3322]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffdc822890 a2=0 a3=ffff9febc6c0 items=0 ppid=3237 pid=3322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:00.583000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 9 19:16:00.590000 audit[3324]: NETFILTER_CFG 
table=filter:85 family=10 entries=2 op=nft_register_chain pid=3324 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:16:00.590000 audit[3324]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffcb974440 a2=0 a3=ffffa7d646c0 items=0 ppid=3237 pid=3324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:00.590000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Feb 9 19:16:00.598000 audit[3327]: NETFILTER_CFG table=filter:86 family=10 entries=2 op=nft_register_chain pid=3327 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:16:00.598000 audit[3327]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=fffff2df3dd0 a2=0 a3=ffffa1b466c0 items=0 ppid=3237 pid=3327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:00.598000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Feb 9 19:16:00.600000 audit[3328]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_chain pid=3328 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:16:00.600000 audit[3328]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffedf632a0 a2=0 a3=ffff8a91b6c0 items=0 ppid=3237 pid=3328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:00.600000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 9 19:16:00.607000 audit[3330]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_rule pid=3330 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:16:00.607000 audit[3330]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffcdc52e60 a2=0 a3=ffff8a9296c0 items=0 ppid=3237 pid=3330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:00.607000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 9 19:16:00.609000 audit[3331]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3331 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:16:00.609000 audit[3331]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffae2ac90 a2=0 a3=ffff918326c0 items=0 ppid=3237 pid=3331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:00.609000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 9 19:16:00.617000 audit[3333]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3333 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:16:00.617000 audit[3333]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=744 a0=3 a1=fffff99b1a20 a2=0 a3=ffffa3f0b6c0 items=0 ppid=3237 pid=3333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:00.617000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Feb 9 19:16:00.627000 audit[3336]: NETFILTER_CFG table=filter:91 family=10 entries=2 op=nft_register_chain pid=3336 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:16:00.627000 audit[3336]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffec76dc00 a2=0 a3=ffff95b3d6c0 items=0 ppid=3237 pid=3336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:00.627000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 9 19:16:00.630000 audit[3337]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_chain pid=3337 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:16:00.630000 audit[3337]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd2982d40 a2=0 a3=ffffa622c6c0 items=0 ppid=3237 pid=3337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:00.630000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 9 19:16:00.636000 audit[3339]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3339 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:16:00.636000 audit[3339]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffffd3f6260 a2=0 a3=ffff9ce2a6c0 items=0 ppid=3237 pid=3339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:00.636000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 9 19:16:00.639000 audit[3340]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_chain pid=3340 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:16:00.639000 audit[3340]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffee626e30 a2=0 a3=ffffae2176c0 items=0 ppid=3237 pid=3340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:00.639000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 9 19:16:00.646000 audit[3342]: NETFILTER_CFG table=filter:95 family=10 entries=1 op=nft_register_rule pid=3342 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:16:00.646000 audit[3342]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffc371eb0 a2=0 a3=ffff812476c0 items=0 ppid=3237 pid=3342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:00.646000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 9 19:16:00.655000 audit[3345]: NETFILTER_CFG table=filter:96 family=10 entries=1 op=nft_register_rule pid=3345 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:16:00.655000 audit[3345]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc01d08c0 a2=0 a3=ffff9aeca6c0 items=0 ppid=3237 pid=3345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:00.655000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 9 19:16:00.665000 audit[3348]: NETFILTER_CFG table=filter:97 family=10 entries=1 op=nft_register_rule pid=3348 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:16:00.665000 audit[3348]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc2769fd0 a2=0 a3=ffff7fa476c0 items=0 ppid=3237 pid=3348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:00.665000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Feb 9 19:16:00.674000 audit[3349]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3349 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:16:00.674000 audit[3349]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffcc407260 a2=0 a3=ffffb6a596c0 items=0 ppid=3237 pid=3349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:00.674000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 9 19:16:00.680000 audit[3351]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3351 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:16:00.680000 audit[3351]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffd38cff50 a2=0 a3=ffffb09ef6c0 items=0 ppid=3237 pid=3351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:00.680000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 19:16:00.690000 audit[3354]: NETFILTER_CFG table=nat:100 family=10 entries=2 op=nft_register_chain pid=3354 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:16:00.690000 audit[3354]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffed94ee50 a2=0 
a3=ffffb7bd66c0 items=0 ppid=3237 pid=3354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:00.690000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 19:16:00.708000 audit[3358]: NETFILTER_CFG table=filter:101 family=10 entries=3 op=nft_register_rule pid=3358 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 9 19:16:00.708000 audit[3358]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffe354ff20 a2=0 a3=ffff9a6eb6c0 items=0 ppid=3237 pid=3358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:00.708000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:16:00.709000 audit[3358]: NETFILTER_CFG table=nat:102 family=10 entries=10 op=nft_register_chain pid=3358 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 9 19:16:00.709000 audit[3358]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1968 a0=3 a1=ffffe354ff20 a2=0 a3=ffff9a6eb6c0 items=0 ppid=3237 pid=3358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:00.709000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:16:01.238395 env[1741]: time="2024-02-09T19:16:01.238316088Z" level=info msg="ImageCreate event 
&ImageCreate{Name:quay.io/tigera/operator:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:16:01.246982 env[1741]: time="2024-02-09T19:16:01.246925815Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c7a10ec867a90652f951a6ba5a12efb94165e0a1c9b72167810d1065e57d768f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:16:01.249882 env[1741]: time="2024-02-09T19:16:01.249833980Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:16:01.254217 env[1741]: time="2024-02-09T19:16:01.254167224Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:715ac9a30f8a9579e44258af20de354715429e11836b493918e9e1a696e9b028,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:16:01.255910 env[1741]: time="2024-02-09T19:16:01.255852880Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.32.3\" returns image reference \"sha256:c7a10ec867a90652f951a6ba5a12efb94165e0a1c9b72167810d1065e57d768f\"" Feb 9 19:16:01.263107 env[1741]: time="2024-02-09T19:16:01.262841321Z" level=info msg="CreateContainer within sandbox \"257d174a1e37e5374d37ce0a2b4d586ab6eceb80c26a14f7e1b79d5f170bbb13\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 9 19:16:01.283607 env[1741]: time="2024-02-09T19:16:01.281403815Z" level=info msg="CreateContainer within sandbox \"257d174a1e37e5374d37ce0a2b4d586ab6eceb80c26a14f7e1b79d5f170bbb13\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"fe5bb0711b2f4b043117738d0efddfbd9aa6e75597d99001350950f6f8cacf6f\"" Feb 9 19:16:01.284096 env[1741]: time="2024-02-09T19:16:01.284042485Z" level=info msg="StartContainer for \"fe5bb0711b2f4b043117738d0efddfbd9aa6e75597d99001350950f6f8cacf6f\"" Feb 9 19:16:01.331472 systemd[1]: 
run-containerd-runc-k8s.io-fe5bb0711b2f4b043117738d0efddfbd9aa6e75597d99001350950f6f8cacf6f-runc.mFmQMq.mount: Deactivated successfully. Feb 9 19:16:01.417100 env[1741]: time="2024-02-09T19:16:01.417027352Z" level=info msg="StartContainer for \"fe5bb0711b2f4b043117738d0efddfbd9aa6e75597d99001350950f6f8cacf6f\" returns successfully" Feb 9 19:16:02.186998 kubelet[3035]: I0209 19:16:02.186944 3035 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-dgvwd" podStartSLOduration=5.185137075 pod.CreationTimestamp="2024-02-09 19:15:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:16:00.181479001 +0000 UTC m=+15.765253175" watchObservedRunningTime="2024-02-09 19:16:02.185137075 +0000 UTC m=+17.768911261" Feb 9 19:16:05.092254 kubelet[3035]: I0209 19:16:05.092193 3035 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-cfc98749c-j29jj" podStartSLOduration=-9.223372028762642e+09 pod.CreationTimestamp="2024-02-09 19:15:57 +0000 UTC" firstStartedPulling="2024-02-09 19:15:58.459918912 +0000 UTC m=+14.043693074" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:16:02.188336572 +0000 UTC m=+17.772110770" watchObservedRunningTime="2024-02-09 19:16:05.092133113 +0000 UTC m=+20.675907299" Feb 9 19:16:05.286000 audit[3422]: NETFILTER_CFG table=filter:103 family=2 entries=13 op=nft_register_rule pid=3422 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:16:05.286000 audit[3422]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4732 a0=3 a1=fffff4b0f210 a2=0 a3=ffff850506c0 items=0 ppid=3237 pid=3422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:05.286000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:16:05.288000 audit[3422]: NETFILTER_CFG table=nat:104 family=2 entries=20 op=nft_register_rule pid=3422 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:16:05.288000 audit[3422]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=fffff4b0f210 a2=0 a3=ffff850506c0 items=0 ppid=3237 pid=3422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:05.288000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:16:05.335328 kubelet[3035]: I0209 19:16:05.335246 3035 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:16:05.451330 kubelet[3035]: I0209 19:16:05.451290 3035 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvtdg\" (UniqueName: \"kubernetes.io/projected/ab876074-e9f4-4d2f-8f2e-9d4d50039cc4-kube-api-access-vvtdg\") pod \"calico-typha-56f767cfb9-qf64t\" (UID: \"ab876074-e9f4-4d2f-8f2e-9d4d50039cc4\") " pod="calico-system/calico-typha-56f767cfb9-qf64t" Feb 9 19:16:05.451664 kubelet[3035]: I0209 19:16:05.451638 3035 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ab876074-e9f4-4d2f-8f2e-9d4d50039cc4-typha-certs\") pod \"calico-typha-56f767cfb9-qf64t\" (UID: \"ab876074-e9f4-4d2f-8f2e-9d4d50039cc4\") " pod="calico-system/calico-typha-56f767cfb9-qf64t" Feb 9 19:16:05.452041 kubelet[3035]: I0209 19:16:05.452013 3035 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/ab876074-e9f4-4d2f-8f2e-9d4d50039cc4-tigera-ca-bundle\") pod \"calico-typha-56f767cfb9-qf64t\" (UID: \"ab876074-e9f4-4d2f-8f2e-9d4d50039cc4\") " pod="calico-system/calico-typha-56f767cfb9-qf64t" Feb 9 19:16:05.529518 kubelet[3035]: I0209 19:16:05.529473 3035 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:16:05.590762 kernel: kauditd_printk_skb: 128 callbacks suppressed Feb 9 19:16:05.590917 kernel: audit: type=1325 audit(1707506165.582:278): table=filter:105 family=2 entries=14 op=nft_register_rule pid=3449 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:16:05.582000 audit[3449]: NETFILTER_CFG table=filter:105 family=2 entries=14 op=nft_register_rule pid=3449 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:16:05.582000 audit[3449]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4732 a0=3 a1=ffffd81fa650 a2=0 a3=ffffad8c66c0 items=0 ppid=3237 pid=3449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:05.603594 kernel: audit: type=1300 audit(1707506165.582:278): arch=c00000b7 syscall=211 success=yes exit=4732 a0=3 a1=ffffd81fa650 a2=0 a3=ffffad8c66c0 items=0 ppid=3237 pid=3449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:05.603691 kernel: audit: type=1327 audit(1707506165.582:278): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:16:05.582000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:16:05.592000 audit[3449]: NETFILTER_CFG table=nat:106 family=2 entries=20 op=nft_register_rule pid=3449 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:16:05.592000 audit[3449]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=ffffd81fa650 a2=0 a3=ffffad8c66c0 items=0 ppid=3237 pid=3449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:05.634286 kernel: audit: type=1325 audit(1707506165.592:279): table=nat:106 family=2 entries=20 op=nft_register_rule pid=3449 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:16:05.634421 kernel: audit: type=1300 audit(1707506165.592:279): arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=ffffd81fa650 a2=0 a3=ffffad8c66c0 items=0 ppid=3237 pid=3449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:05.592000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:16:05.642666 kernel: audit: type=1327 audit(1707506165.592:279): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:16:05.654265 kubelet[3035]: I0209 19:16:05.654206 3035 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c8c6cb0-1cc1-4057-8fc0-e6a32e0317f9-tigera-ca-bundle\") pod \"calico-node-kkzpq\" (UID: \"9c8c6cb0-1cc1-4057-8fc0-e6a32e0317f9\") " pod="calico-system/calico-node-kkzpq" Feb 9 19:16:05.654478 kubelet[3035]: I0209 19:16:05.654287 3035 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: 
\"kubernetes.io/host-path/9c8c6cb0-1cc1-4057-8fc0-e6a32e0317f9-var-run-calico\") pod \"calico-node-kkzpq\" (UID: \"9c8c6cb0-1cc1-4057-8fc0-e6a32e0317f9\") " pod="calico-system/calico-node-kkzpq" Feb 9 19:16:05.654478 kubelet[3035]: I0209 19:16:05.654334 3035 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9c8c6cb0-1cc1-4057-8fc0-e6a32e0317f9-var-lib-calico\") pod \"calico-node-kkzpq\" (UID: \"9c8c6cb0-1cc1-4057-8fc0-e6a32e0317f9\") " pod="calico-system/calico-node-kkzpq" Feb 9 19:16:05.654478 kubelet[3035]: I0209 19:16:05.654383 3035 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9c8c6cb0-1cc1-4057-8fc0-e6a32e0317f9-cni-bin-dir\") pod \"calico-node-kkzpq\" (UID: \"9c8c6cb0-1cc1-4057-8fc0-e6a32e0317f9\") " pod="calico-system/calico-node-kkzpq" Feb 9 19:16:05.654478 kubelet[3035]: I0209 19:16:05.654432 3035 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9c8c6cb0-1cc1-4057-8fc0-e6a32e0317f9-xtables-lock\") pod \"calico-node-kkzpq\" (UID: \"9c8c6cb0-1cc1-4057-8fc0-e6a32e0317f9\") " pod="calico-system/calico-node-kkzpq" Feb 9 19:16:05.654724 kubelet[3035]: I0209 19:16:05.654482 3035 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9c8c6cb0-1cc1-4057-8fc0-e6a32e0317f9-flexvol-driver-host\") pod \"calico-node-kkzpq\" (UID: \"9c8c6cb0-1cc1-4057-8fc0-e6a32e0317f9\") " pod="calico-system/calico-node-kkzpq" Feb 9 19:16:05.654724 kubelet[3035]: I0209 19:16:05.654560 3035 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/9c8c6cb0-1cc1-4057-8fc0-e6a32e0317f9-lib-modules\") pod \"calico-node-kkzpq\" (UID: \"9c8c6cb0-1cc1-4057-8fc0-e6a32e0317f9\") " pod="calico-system/calico-node-kkzpq" Feb 9 19:16:05.654724 kubelet[3035]: I0209 19:16:05.654610 3035 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9c8c6cb0-1cc1-4057-8fc0-e6a32e0317f9-cni-log-dir\") pod \"calico-node-kkzpq\" (UID: \"9c8c6cb0-1cc1-4057-8fc0-e6a32e0317f9\") " pod="calico-system/calico-node-kkzpq" Feb 9 19:16:05.654724 kubelet[3035]: I0209 19:16:05.654660 3035 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9c8c6cb0-1cc1-4057-8fc0-e6a32e0317f9-policysync\") pod \"calico-node-kkzpq\" (UID: \"9c8c6cb0-1cc1-4057-8fc0-e6a32e0317f9\") " pod="calico-system/calico-node-kkzpq" Feb 9 19:16:05.654724 kubelet[3035]: I0209 19:16:05.654704 3035 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9c8c6cb0-1cc1-4057-8fc0-e6a32e0317f9-node-certs\") pod \"calico-node-kkzpq\" (UID: \"9c8c6cb0-1cc1-4057-8fc0-e6a32e0317f9\") " pod="calico-system/calico-node-kkzpq" Feb 9 19:16:05.655037 kubelet[3035]: I0209 19:16:05.654748 3035 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9c8c6cb0-1cc1-4057-8fc0-e6a32e0317f9-cni-net-dir\") pod \"calico-node-kkzpq\" (UID: \"9c8c6cb0-1cc1-4057-8fc0-e6a32e0317f9\") " pod="calico-system/calico-node-kkzpq" Feb 9 19:16:05.655037 kubelet[3035]: I0209 19:16:05.654821 3035 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2wx5\" (UniqueName: \"kubernetes.io/projected/9c8c6cb0-1cc1-4057-8fc0-e6a32e0317f9-kube-api-access-s2wx5\") pod 
\"calico-node-kkzpq\" (UID: \"9c8c6cb0-1cc1-4057-8fc0-e6a32e0317f9\") " pod="calico-system/calico-node-kkzpq" Feb 9 19:16:05.667991 kubelet[3035]: I0209 19:16:05.667906 3035 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:16:05.668559 kubelet[3035]: E0209 19:16:05.668501 3035 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-szjdz" podUID=9d3adc59-2fa5-4081-acf7-fb99c5b37340 Feb 9 19:16:05.755645 kubelet[3035]: I0209 19:16:05.755491 3035 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9d3adc59-2fa5-4081-acf7-fb99c5b37340-registration-dir\") pod \"csi-node-driver-szjdz\" (UID: \"9d3adc59-2fa5-4081-acf7-fb99c5b37340\") " pod="calico-system/csi-node-driver-szjdz" Feb 9 19:16:05.755645 kubelet[3035]: I0209 19:16:05.755647 3035 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9d3adc59-2fa5-4081-acf7-fb99c5b37340-kubelet-dir\") pod \"csi-node-driver-szjdz\" (UID: \"9d3adc59-2fa5-4081-acf7-fb99c5b37340\") " pod="calico-system/csi-node-driver-szjdz" Feb 9 19:16:05.755904 kubelet[3035]: I0209 19:16:05.755734 3035 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9d3adc59-2fa5-4081-acf7-fb99c5b37340-varrun\") pod \"csi-node-driver-szjdz\" (UID: \"9d3adc59-2fa5-4081-acf7-fb99c5b37340\") " pod="calico-system/csi-node-driver-szjdz" Feb 9 19:16:05.755904 kubelet[3035]: I0209 19:16:05.755867 3035 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/host-path/9d3adc59-2fa5-4081-acf7-fb99c5b37340-socket-dir\") pod \"csi-node-driver-szjdz\" (UID: \"9d3adc59-2fa5-4081-acf7-fb99c5b37340\") " pod="calico-system/csi-node-driver-szjdz" Feb 9 19:16:05.756032 kubelet[3035]: I0209 19:16:05.755920 3035 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpksq\" (UniqueName: \"kubernetes.io/projected/9d3adc59-2fa5-4081-acf7-fb99c5b37340-kube-api-access-hpksq\") pod \"csi-node-driver-szjdz\" (UID: \"9d3adc59-2fa5-4081-acf7-fb99c5b37340\") " pod="calico-system/csi-node-driver-szjdz" Feb 9 19:16:05.763829 kubelet[3035]: E0209 19:16:05.761605 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:05.763829 kubelet[3035]: W0209 19:16:05.761668 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:05.763829 kubelet[3035]: E0209 19:16:05.761732 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:05.770964 kubelet[3035]: E0209 19:16:05.766423 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:05.770964 kubelet[3035]: W0209 19:16:05.766480 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:05.770964 kubelet[3035]: E0209 19:16:05.766570 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:16:05.770964 kubelet[3035]: E0209 19:16:05.767053 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:05.770964 kubelet[3035]: W0209 19:16:05.767077 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:05.770964 kubelet[3035]: E0209 19:16:05.767112 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:05.770964 kubelet[3035]: E0209 19:16:05.767445 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:05.770964 kubelet[3035]: W0209 19:16:05.767461 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:05.770964 kubelet[3035]: E0209 19:16:05.767597 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:16:05.770964 kubelet[3035]: E0209 19:16:05.769250 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:05.771668 kubelet[3035]: W0209 19:16:05.769289 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:05.771668 kubelet[3035]: E0209 19:16:05.769472 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:05.771668 kubelet[3035]: E0209 19:16:05.769843 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:05.771668 kubelet[3035]: W0209 19:16:05.769875 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:05.771668 kubelet[3035]: E0209 19:16:05.769908 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:16:05.771668 kubelet[3035]: E0209 19:16:05.770342 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:05.771668 kubelet[3035]: W0209 19:16:05.770364 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:05.771668 kubelet[3035]: E0209 19:16:05.770398 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:05.771668 kubelet[3035]: E0209 19:16:05.770821 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:05.771668 kubelet[3035]: W0209 19:16:05.770839 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:05.772235 kubelet[3035]: E0209 19:16:05.770898 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:16:05.772235 kubelet[3035]: E0209 19:16:05.771338 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:05.772235 kubelet[3035]: W0209 19:16:05.771357 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:05.772235 kubelet[3035]: E0209 19:16:05.771382 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:05.772235 kubelet[3035]: E0209 19:16:05.771912 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:05.772235 kubelet[3035]: W0209 19:16:05.771930 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:05.772235 kubelet[3035]: E0209 19:16:05.771989 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:16:05.784019 kubelet[3035]: E0209 19:16:05.783954 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:05.784019 kubelet[3035]: W0209 19:16:05.783995 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:05.784239 kubelet[3035]: E0209 19:16:05.784037 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:05.856664 kubelet[3035]: E0209 19:16:05.856613 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:05.856664 kubelet[3035]: W0209 19:16:05.856655 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:05.856950 kubelet[3035]: E0209 19:16:05.856698 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:16:05.857298 kubelet[3035]: E0209 19:16:05.857253 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:05.857298 kubelet[3035]: W0209 19:16:05.857291 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:05.857503 kubelet[3035]: E0209 19:16:05.857330 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:05.857951 kubelet[3035]: E0209 19:16:05.857901 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:05.857951 kubelet[3035]: W0209 19:16:05.857943 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:05.858150 kubelet[3035]: E0209 19:16:05.857982 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:16:05.858812 kubelet[3035]: E0209 19:16:05.858518 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:05.858812 kubelet[3035]: W0209 19:16:05.858560 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:05.858812 kubelet[3035]: E0209 19:16:05.858618 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:05.860818 kubelet[3035]: E0209 19:16:05.859184 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:05.860818 kubelet[3035]: W0209 19:16:05.859226 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:05.860818 kubelet[3035]: E0209 19:16:05.859267 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:16:05.860818 kubelet[3035]: E0209 19:16:05.859816 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:05.860818 kubelet[3035]: W0209 19:16:05.859845 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:05.860818 kubelet[3035]: E0209 19:16:05.859881 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:05.860818 kubelet[3035]: E0209 19:16:05.860419 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:05.860818 kubelet[3035]: W0209 19:16:05.860450 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:05.860818 kubelet[3035]: E0209 19:16:05.860486 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:16:05.863010 kubelet[3035]: E0209 19:16:05.862957 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:05.863010 kubelet[3035]: W0209 19:16:05.862999 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:05.863357 kubelet[3035]: E0209 19:16:05.863322 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:05.864880 kubelet[3035]: E0209 19:16:05.863489 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:05.864880 kubelet[3035]: W0209 19:16:05.863524 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:05.864880 kubelet[3035]: E0209 19:16:05.864050 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:05.864880 kubelet[3035]: W0209 19:16:05.864081 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:05.864880 kubelet[3035]: E0209 19:16:05.864580 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:05.864880 kubelet[3035]: W0209 19:16:05.864610 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], 
error: executable file not found in $PATH, output: "" Feb 9 19:16:05.865346 kubelet[3035]: E0209 19:16:05.865177 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:05.865346 kubelet[3035]: W0209 19:16:05.865207 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:05.865576 kubelet[3035]: E0209 19:16:05.865526 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:05.865812 kubelet[3035]: E0209 19:16:05.865650 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:05.866003 kubelet[3035]: W0209 19:16:05.865962 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:05.866173 kubelet[3035]: E0209 19:16:05.866145 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:05.870357 kubelet[3035]: E0209 19:16:05.865673 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:05.870357 kubelet[3035]: E0209 19:16:05.865693 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:16:05.870357 kubelet[3035]: E0209 19:16:05.865708 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:05.870933 kubelet[3035]: E0209 19:16:05.870900 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:05.871134 kubelet[3035]: W0209 19:16:05.871093 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:05.871314 kubelet[3035]: E0209 19:16:05.871287 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:05.871948 kubelet[3035]: E0209 19:16:05.871914 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:05.872173 kubelet[3035]: W0209 19:16:05.872129 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:05.872391 kubelet[3035]: E0209 19:16:05.872357 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:16:05.873685 kubelet[3035]: E0209 19:16:05.872817 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:05.873685 kubelet[3035]: W0209 19:16:05.872855 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:05.873685 kubelet[3035]: E0209 19:16:05.872905 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:05.874924 kubelet[3035]: E0209 19:16:05.874882 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:05.875241 kubelet[3035]: W0209 19:16:05.875198 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:05.875901 kubelet[3035]: E0209 19:16:05.875782 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:16:05.876545 kubelet[3035]: E0209 19:16:05.876488 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:05.877858 kubelet[3035]: W0209 19:16:05.877727 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:05.878409 kubelet[3035]: E0209 19:16:05.878339 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:05.879120 kubelet[3035]: E0209 19:16:05.879083 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:05.879451 kubelet[3035]: W0209 19:16:05.879383 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:05.879844 kubelet[3035]: E0209 19:16:05.879729 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:16:05.880928 kubelet[3035]: E0209 19:16:05.880889 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:05.881411 kubelet[3035]: W0209 19:16:05.881344 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:05.881806 kubelet[3035]: E0209 19:16:05.881720 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:05.884384 kubelet[3035]: E0209 19:16:05.884317 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:05.884940 kubelet[3035]: W0209 19:16:05.884708 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:05.885549 kubelet[3035]: E0209 19:16:05.885480 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:16:05.887842 kubelet[3035]: E0209 19:16:05.887723 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:05.888205 kubelet[3035]: W0209 19:16:05.888136 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:05.888539 kubelet[3035]: E0209 19:16:05.888508 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:05.889717 kubelet[3035]: E0209 19:16:05.889621 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:05.890235 kubelet[3035]: W0209 19:16:05.890165 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:05.890564 kubelet[3035]: E0209 19:16:05.890531 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:16:05.892745 kubelet[3035]: E0209 19:16:05.892671 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:05.893169 kubelet[3035]: W0209 19:16:05.893096 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:05.893533 kubelet[3035]: E0209 19:16:05.893464 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:05.894638 kubelet[3035]: E0209 19:16:05.894559 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:05.895101 kubelet[3035]: W0209 19:16:05.895026 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:05.896643 kubelet[3035]: E0209 19:16:05.896576 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:16:05.897908 kubelet[3035]: E0209 19:16:05.897838 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:05.898230 kubelet[3035]: W0209 19:16:05.898190 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:05.898645 kubelet[3035]: E0209 19:16:05.898570 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:05.899739 kubelet[3035]: E0209 19:16:05.899700 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:05.900127 kubelet[3035]: W0209 19:16:05.900054 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:05.900358 kubelet[3035]: E0209 19:16:05.900327 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:16:05.961559 kubelet[3035]: E0209 19:16:05.961507 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:05.961559 kubelet[3035]: W0209 19:16:05.961547 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:05.961850 kubelet[3035]: E0209 19:16:05.961591 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:05.962185 kubelet[3035]: E0209 19:16:05.962142 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:05.962302 kubelet[3035]: W0209 19:16:05.962177 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:05.962302 kubelet[3035]: E0209 19:16:05.962230 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:16:05.962775 kubelet[3035]: E0209 19:16:05.962718 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:05.962775 kubelet[3035]: W0209 19:16:05.962771 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:05.962966 kubelet[3035]: E0209 19:16:05.962807 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:06.063911 kubelet[3035]: E0209 19:16:06.063802 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:06.064127 kubelet[3035]: W0209 19:16:06.064100 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:06.064270 kubelet[3035]: E0209 19:16:06.064250 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:16:06.068952 kubelet[3035]: E0209 19:16:06.068914 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:06.069163 kubelet[3035]: W0209 19:16:06.069132 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:06.069318 kubelet[3035]: E0209 19:16:06.069296 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:06.070055 kubelet[3035]: E0209 19:16:06.070019 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:06.070322 kubelet[3035]: W0209 19:16:06.070291 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:06.070488 kubelet[3035]: E0209 19:16:06.070465 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:16:06.171257 kubelet[3035]: E0209 19:16:06.171225 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:06.171941 kubelet[3035]: W0209 19:16:06.171908 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:06.172184 kubelet[3035]: E0209 19:16:06.172162 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:06.173801 kubelet[3035]: E0209 19:16:06.173746 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:06.174044 kubelet[3035]: W0209 19:16:06.174012 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:06.174239 kubelet[3035]: E0209 19:16:06.174215 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:16:06.174960 kubelet[3035]: E0209 19:16:06.174917 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:06.175157 kubelet[3035]: W0209 19:16:06.175126 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:06.175313 kubelet[3035]: E0209 19:16:06.175291 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:06.276747 kubelet[3035]: E0209 19:16:06.276698 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:06.276933 kubelet[3035]: W0209 19:16:06.276783 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:06.276933 kubelet[3035]: E0209 19:16:06.276825 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:16:06.277536 kubelet[3035]: E0209 19:16:06.277474 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:06.277536 kubelet[3035]: W0209 19:16:06.277531 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:06.277738 kubelet[3035]: E0209 19:16:06.277593 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:06.278293 kubelet[3035]: E0209 19:16:06.278243 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:06.278293 kubelet[3035]: W0209 19:16:06.278285 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:06.278538 kubelet[3035]: E0209 19:16:06.278348 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:16:06.298559 kubelet[3035]: E0209 19:16:06.295710 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:06.298559 kubelet[3035]: W0209 19:16:06.295769 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:06.298559 kubelet[3035]: E0209 19:16:06.295808 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:06.380127 kubelet[3035]: E0209 19:16:06.379989 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:06.380369 kubelet[3035]: W0209 19:16:06.380330 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:06.380622 kubelet[3035]: E0209 19:16:06.380594 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:16:06.381247 kubelet[3035]: E0209 19:16:06.381213 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:06.381454 kubelet[3035]: W0209 19:16:06.381418 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:06.381609 kubelet[3035]: E0209 19:16:06.381584 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:06.485868 kubelet[3035]: E0209 19:16:06.485820 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:06.485868 kubelet[3035]: W0209 19:16:06.485858 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:06.486153 kubelet[3035]: E0209 19:16:06.485900 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:16:06.497304 kubelet[3035]: E0209 19:16:06.490911 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:06.497304 kubelet[3035]: W0209 19:16:06.490951 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:06.497304 kubelet[3035]: E0209 19:16:06.490992 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:06.507143 kubelet[3035]: E0209 19:16:06.507106 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:06.507473 kubelet[3035]: W0209 19:16:06.507365 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:06.507742 kubelet[3035]: E0209 19:16:06.507702 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:16:06.555800 env[1741]: time="2024-02-09T19:16:06.554993586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-56f767cfb9-qf64t,Uid:ab876074-e9f4-4d2f-8f2e-9d4d50039cc4,Namespace:calico-system,Attempt:0,}" Feb 9 19:16:06.592566 kubelet[3035]: E0209 19:16:06.592400 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:06.592566 kubelet[3035]: W0209 19:16:06.592434 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:06.592566 kubelet[3035]: E0209 19:16:06.592475 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:06.599487 env[1741]: time="2024-02-09T19:16:06.598501278Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:16:06.600002 env[1741]: time="2024-02-09T19:16:06.599924393Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:16:06.600369 env[1741]: time="2024-02-09T19:16:06.600312404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:16:06.600870 env[1741]: time="2024-02-09T19:16:06.600803323Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/28fcc95023583bb7313dee6d8490e8d8c418c4871e2b45a3c9be91387170ebd7 pid=3528 runtime=io.containerd.runc.v2 Feb 9 19:16:06.695723 kubelet[3035]: E0209 19:16:06.694031 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:06.695723 kubelet[3035]: W0209 19:16:06.694067 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:06.695723 kubelet[3035]: E0209 19:16:06.694101 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:06.712882 kubelet[3035]: E0209 19:16:06.712847 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:06.713088 kubelet[3035]: W0209 19:16:06.713060 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:06.713249 kubelet[3035]: E0209 19:16:06.713226 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:16:06.743954 env[1741]: time="2024-02-09T19:16:06.743889911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kkzpq,Uid:9c8c6cb0-1cc1-4057-8fc0-e6a32e0317f9,Namespace:calico-system,Attempt:0,}" Feb 9 19:16:06.798044 env[1741]: time="2024-02-09T19:16:06.793182985Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:16:06.798044 env[1741]: time="2024-02-09T19:16:06.793252346Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:16:06.798044 env[1741]: time="2024-02-09T19:16:06.793278631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:16:06.798044 env[1741]: time="2024-02-09T19:16:06.793518558Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2edab0a454c7a6440358c2c33876fced3e3911c90aaa7c0fbdbf9cd6c3fcdb37 pid=3578 runtime=io.containerd.runc.v2 Feb 9 19:16:06.878412 env[1741]: time="2024-02-09T19:16:06.878353311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-56f767cfb9-qf64t,Uid:ab876074-e9f4-4d2f-8f2e-9d4d50039cc4,Namespace:calico-system,Attempt:0,} returns sandbox id \"28fcc95023583bb7313dee6d8490e8d8c418c4871e2b45a3c9be91387170ebd7\"" Feb 9 19:16:06.886264 env[1741]: time="2024-02-09T19:16:06.886173039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.27.0\"" Feb 9 19:16:07.056000 audit[3623]: NETFILTER_CFG table=filter:107 family=2 entries=14 op=nft_register_rule pid=3623 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:16:07.065945 kubelet[3035]: E0209 19:16:07.065876 3035 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-szjdz" podUID=9d3adc59-2fa5-4081-acf7-fb99c5b37340 Feb 9 19:16:07.068842 kernel: audit: type=1325 audit(1707506167.056:280): table=filter:107 family=2 entries=14 op=nft_register_rule pid=3623 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:16:07.056000 audit[3623]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4732 a0=3 a1=ffffdd79c840 a2=0 a3=ffffa595d6c0 items=0 ppid=3237 pid=3623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:07.056000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:16:07.108819 kernel: audit: type=1300 audit(1707506167.056:280): arch=c00000b7 syscall=211 success=yes exit=4732 a0=3 a1=ffffdd79c840 a2=0 a3=ffffa595d6c0 items=0 ppid=3237 pid=3623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:07.086000 audit[3623]: NETFILTER_CFG table=nat:108 family=2 entries=20 op=nft_register_rule pid=3623 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:16:07.124557 kernel: audit: type=1327 audit(1707506167.056:280): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:16:07.124677 kernel: audit: type=1325 audit(1707506167.086:281): table=nat:108 family=2 entries=20 op=nft_register_rule pid=3623 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:16:07.086000 audit[3623]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=ffffdd79c840 a2=0 a3=ffffa595d6c0 items=0 ppid=3237 
pid=3623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:07.086000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:16:07.170694 env[1741]: time="2024-02-09T19:16:07.170622876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kkzpq,Uid:9c8c6cb0-1cc1-4057-8fc0-e6a32e0317f9,Namespace:calico-system,Attempt:0,} returns sandbox id \"2edab0a454c7a6440358c2c33876fced3e3911c90aaa7c0fbdbf9cd6c3fcdb37\"" Feb 9 19:16:08.319035 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount769016691.mount: Deactivated successfully. Feb 9 19:16:09.062656 kubelet[3035]: E0209 19:16:09.062592 3035 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-szjdz" podUID=9d3adc59-2fa5-4081-acf7-fb99c5b37340 Feb 9 19:16:10.232831 env[1741]: time="2024-02-09T19:16:10.232737344Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:16:10.236088 env[1741]: time="2024-02-09T19:16:10.236019980Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fba96c9caf161e105c76b559b06b4b2337b89b54833d69984209161d93145969,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:16:10.240532 env[1741]: time="2024-02-09T19:16:10.240475737Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:16:10.244718 env[1741]: 
time="2024-02-09T19:16:10.244662899Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:5f2d3b8c354a4eb6de46e786889913916e620c6c256982fb8d0f1a1d36a282bc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:16:10.246365 env[1741]: time="2024-02-09T19:16:10.246308280Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.27.0\" returns image reference \"sha256:fba96c9caf161e105c76b559b06b4b2337b89b54833d69984209161d93145969\"" Feb 9 19:16:10.250785 env[1741]: time="2024-02-09T19:16:10.249422862Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\"" Feb 9 19:16:10.282478 env[1741]: time="2024-02-09T19:16:10.282398262Z" level=info msg="CreateContainer within sandbox \"28fcc95023583bb7313dee6d8490e8d8c418c4871e2b45a3c9be91387170ebd7\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 9 19:16:10.311265 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3459863863.mount: Deactivated successfully. 
Feb 9 19:16:10.315186 env[1741]: time="2024-02-09T19:16:10.315112712Z" level=info msg="CreateContainer within sandbox \"28fcc95023583bb7313dee6d8490e8d8c418c4871e2b45a3c9be91387170ebd7\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"5aad1c1cffbc3847516334dd518d228ad2e46d70c9e8f3b0bf0dd93388ac2a35\"" Feb 9 19:16:10.317481 env[1741]: time="2024-02-09T19:16:10.317420177Z" level=info msg="StartContainer for \"5aad1c1cffbc3847516334dd518d228ad2e46d70c9e8f3b0bf0dd93388ac2a35\"" Feb 9 19:16:10.509660 env[1741]: time="2024-02-09T19:16:10.505786665Z" level=info msg="StartContainer for \"5aad1c1cffbc3847516334dd518d228ad2e46d70c9e8f3b0bf0dd93388ac2a35\" returns successfully" Feb 9 19:16:11.063812 kubelet[3035]: E0209 19:16:11.063471 3035 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-szjdz" podUID=9d3adc59-2fa5-4081-acf7-fb99c5b37340 Feb 9 19:16:11.217024 kubelet[3035]: I0209 19:16:11.216971 3035 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-56f767cfb9-qf64t" podStartSLOduration=-9.223372030637865e+09 pod.CreationTimestamp="2024-02-09 19:16:05 +0000 UTC" firstStartedPulling="2024-02-09 19:16:06.885456585 +0000 UTC m=+22.469230747" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:16:11.216517338 +0000 UTC m=+26.800291524" watchObservedRunningTime="2024-02-09 19:16:11.216910321 +0000 UTC m=+26.800684507" Feb 9 19:16:11.263630 kubelet[3035]: E0209 19:16:11.254418 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:11.263630 kubelet[3035]: W0209 19:16:11.254458 3035 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:11.263630 kubelet[3035]: E0209 19:16:11.254505 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:11.263630 kubelet[3035]: E0209 19:16:11.258911 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:11.263630 kubelet[3035]: W0209 19:16:11.258940 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:11.263630 kubelet[3035]: E0209 19:16:11.258976 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:11.263630 kubelet[3035]: E0209 19:16:11.259387 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:11.263630 kubelet[3035]: W0209 19:16:11.259409 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:11.263630 kubelet[3035]: E0209 19:16:11.259436 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:16:11.263630 kubelet[3035]: E0209 19:16:11.259903 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:11.264360 kubelet[3035]: W0209 19:16:11.259927 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:11.264360 kubelet[3035]: E0209 19:16:11.259957 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:11.264360 kubelet[3035]: E0209 19:16:11.260327 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:11.264360 kubelet[3035]: W0209 19:16:11.260368 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:11.264360 kubelet[3035]: E0209 19:16:11.260401 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:16:11.264360 kubelet[3035]: E0209 19:16:11.260826 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:11.264360 kubelet[3035]: W0209 19:16:11.260849 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:11.264360 kubelet[3035]: E0209 19:16:11.260881 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:11.264360 kubelet[3035]: E0209 19:16:11.261627 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:11.264360 kubelet[3035]: W0209 19:16:11.261655 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:11.265041 kubelet[3035]: E0209 19:16:11.261688 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:16:11.265041 kubelet[3035]: E0209 19:16:11.262133 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:11.265041 kubelet[3035]: W0209 19:16:11.262159 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:11.265041 kubelet[3035]: E0209 19:16:11.262191 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:11.265041 kubelet[3035]: E0209 19:16:11.262567 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:11.265041 kubelet[3035]: W0209 19:16:11.262588 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:11.265041 kubelet[3035]: E0209 19:16:11.262619 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:16:11.265041 kubelet[3035]: E0209 19:16:11.263060 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:11.265041 kubelet[3035]: W0209 19:16:11.263084 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:11.265041 kubelet[3035]: E0209 19:16:11.263115 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:11.265608 kubelet[3035]: E0209 19:16:11.263636 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:11.265608 kubelet[3035]: W0209 19:16:11.263661 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:11.265608 kubelet[3035]: E0209 19:16:11.263693 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:16:11.265608 kubelet[3035]: E0209 19:16:11.264260 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:11.265608 kubelet[3035]: W0209 19:16:11.264285 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:11.265608 kubelet[3035]: E0209 19:16:11.264315 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:11.335934 kubelet[3035]: E0209 19:16:11.333855 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:11.335934 kubelet[3035]: W0209 19:16:11.333902 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:11.335934 kubelet[3035]: E0209 19:16:11.333939 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:16:11.335934 kubelet[3035]: E0209 19:16:11.334438 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:11.335934 kubelet[3035]: W0209 19:16:11.334460 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:11.335934 kubelet[3035]: E0209 19:16:11.334493 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:11.335934 kubelet[3035]: E0209 19:16:11.334972 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:11.335934 kubelet[3035]: W0209 19:16:11.334994 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:11.335934 kubelet[3035]: E0209 19:16:11.335032 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:16:11.335934 kubelet[3035]: E0209 19:16:11.335524 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:11.336588 kubelet[3035]: W0209 19:16:11.335550 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:11.336588 kubelet[3035]: E0209 19:16:11.335781 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:11.336588 kubelet[3035]: E0209 19:16:11.336145 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:11.336588 kubelet[3035]: W0209 19:16:11.336170 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:11.336588 kubelet[3035]: E0209 19:16:11.336341 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:16:11.336588 kubelet[3035]: E0209 19:16:11.336623 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:11.336588 kubelet[3035]: W0209 19:16:11.336642 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:11.337171 kubelet[3035]: E0209 19:16:11.336850 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:11.337244 kubelet[3035]: E0209 19:16:11.337178 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:11.337244 kubelet[3035]: W0209 19:16:11.337200 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:11.337244 kubelet[3035]: E0209 19:16:11.337240 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:16:11.337810 kubelet[3035]: E0209 19:16:11.337735 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:11.337810 kubelet[3035]: W0209 19:16:11.337804 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:11.337987 kubelet[3035]: E0209 19:16:11.337899 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:11.338443 kubelet[3035]: E0209 19:16:11.338375 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:11.338557 kubelet[3035]: W0209 19:16:11.338442 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:11.338847 kubelet[3035]: E0209 19:16:11.338781 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:16:11.339126 kubelet[3035]: E0209 19:16:11.339099 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:11.339209 kubelet[3035]: W0209 19:16:11.339126 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:11.339356 kubelet[3035]: E0209 19:16:11.339328 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:11.339692 kubelet[3035]: E0209 19:16:11.339653 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:11.339692 kubelet[3035]: W0209 19:16:11.339686 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:11.340027 kubelet[3035]: E0209 19:16:11.339943 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:16:11.340726 kubelet[3035]: E0209 19:16:11.340680 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:11.340726 kubelet[3035]: W0209 19:16:11.340715 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:11.340957 kubelet[3035]: E0209 19:16:11.340818 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:11.341300 kubelet[3035]: E0209 19:16:11.341268 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:11.341397 kubelet[3035]: W0209 19:16:11.341300 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:11.341495 kubelet[3035]: E0209 19:16:11.341468 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:16:11.342158 kubelet[3035]: E0209 19:16:11.342124 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:11.342158 kubelet[3035]: W0209 19:16:11.342156 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:11.342378 kubelet[3035]: E0209 19:16:11.342340 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:11.342659 kubelet[3035]: E0209 19:16:11.342629 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:11.342738 kubelet[3035]: W0209 19:16:11.342658 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:11.342916 kubelet[3035]: E0209 19:16:11.342872 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:16:11.343173 kubelet[3035]: E0209 19:16:11.343142 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:11.343264 kubelet[3035]: W0209 19:16:11.343173 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:11.343264 kubelet[3035]: E0209 19:16:11.343214 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:11.343615 kubelet[3035]: E0209 19:16:11.343565 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:11.343615 kubelet[3035]: W0209 19:16:11.343612 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:11.343816 kubelet[3035]: E0209 19:16:11.343647 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:16:11.344717 kubelet[3035]: E0209 19:16:11.344681 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:11.344717 kubelet[3035]: W0209 19:16:11.344714 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:11.344968 kubelet[3035]: E0209 19:16:11.344780 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:11.937812 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2037468348.mount: Deactivated successfully. Feb 9 19:16:12.088309 env[1741]: time="2024-02-09T19:16:12.088233867Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:16:12.091618 env[1741]: time="2024-02-09T19:16:12.091550348Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbddd33ed55a4a5c129e8f09945d426860425b9778d9402efe7bcefea7990a57,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:16:12.094232 env[1741]: time="2024-02-09T19:16:12.094171985Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:16:12.096841 env[1741]: time="2024-02-09T19:16:12.096746909Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b05edbd1f80db4ada229e6001a666a7dd36bb6ab617143684fb3d28abfc4b71e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:16:12.097975 env[1741]: 
time="2024-02-09T19:16:12.097928376Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\" returns image reference \"sha256:cbddd33ed55a4a5c129e8f09945d426860425b9778d9402efe7bcefea7990a57\"" Feb 9 19:16:12.104441 env[1741]: time="2024-02-09T19:16:12.104357112Z" level=info msg="CreateContainer within sandbox \"2edab0a454c7a6440358c2c33876fced3e3911c90aaa7c0fbdbf9cd6c3fcdb37\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 9 19:16:12.143814 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount796268703.mount: Deactivated successfully. Feb 9 19:16:12.163555 env[1741]: time="2024-02-09T19:16:12.163452558Z" level=info msg="CreateContainer within sandbox \"2edab0a454c7a6440358c2c33876fced3e3911c90aaa7c0fbdbf9cd6c3fcdb37\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b741a64727c5e2ed54976d385c66952721ac2f31207b547ca2ef75ba54741026\"" Feb 9 19:16:12.164850 env[1741]: time="2024-02-09T19:16:12.164720739Z" level=info msg="StartContainer for \"b741a64727c5e2ed54976d385c66952721ac2f31207b547ca2ef75ba54741026\"" Feb 9 19:16:12.272235 kubelet[3035]: E0209 19:16:12.272099 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:12.273590 kubelet[3035]: W0209 19:16:12.273266 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:12.273590 kubelet[3035]: E0209 19:16:12.273367 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:16:12.274241 kubelet[3035]: E0209 19:16:12.274215 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:12.274425 kubelet[3035]: W0209 19:16:12.274398 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:12.274570 kubelet[3035]: E0209 19:16:12.274548 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:12.275460 kubelet[3035]: E0209 19:16:12.275427 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:12.275672 kubelet[3035]: W0209 19:16:12.275643 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:12.276682 kubelet[3035]: E0209 19:16:12.276652 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:16:12.278053 kubelet[3035]: E0209 19:16:12.278019 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:12.278237 kubelet[3035]: W0209 19:16:12.278209 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:12.278364 kubelet[3035]: E0209 19:16:12.278342 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:12.283954 kubelet[3035]: E0209 19:16:12.283917 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:12.284200 kubelet[3035]: W0209 19:16:12.284169 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:12.284339 kubelet[3035]: E0209 19:16:12.284315 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:16:12.286005 kubelet[3035]: E0209 19:16:12.285969 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:12.286205 kubelet[3035]: W0209 19:16:12.286175 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:12.286339 kubelet[3035]: E0209 19:16:12.286315 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:12.289940 kubelet[3035]: E0209 19:16:12.289904 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:12.290172 kubelet[3035]: W0209 19:16:12.290141 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:12.290325 kubelet[3035]: E0209 19:16:12.290302 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:16:12.291680 kubelet[3035]: E0209 19:16:12.291643 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:12.291959 kubelet[3035]: W0209 19:16:12.291927 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:12.292102 kubelet[3035]: E0209 19:16:12.292078 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:12.292593 kubelet[3035]: E0209 19:16:12.292566 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:12.292786 kubelet[3035]: W0209 19:16:12.292721 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:12.292919 kubelet[3035]: E0209 19:16:12.292897 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:16:12.293748 kubelet[3035]: E0209 19:16:12.293626 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:12.293993 kubelet[3035]: W0209 19:16:12.293962 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:12.294126 kubelet[3035]: E0209 19:16:12.294104 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:12.294629 kubelet[3035]: E0209 19:16:12.294584 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:12.294875 kubelet[3035]: W0209 19:16:12.294847 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:12.295023 kubelet[3035]: E0209 19:16:12.295000 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:16:12.295591 kubelet[3035]: E0209 19:16:12.295498 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:12.303948 kubelet[3035]: W0209 19:16:12.303866 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:12.303948 kubelet[3035]: E0209 19:16:12.303952 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:12.318000 audit[3770]: NETFILTER_CFG table=filter:109 family=2 entries=13 op=nft_register_rule pid=3770 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:16:12.321419 kernel: kauditd_printk_skb: 2 callbacks suppressed Feb 9 19:16:12.321547 kernel: audit: type=1325 audit(1707506172.318:282): table=filter:109 family=2 entries=13 op=nft_register_rule pid=3770 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:16:12.318000 audit[3770]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4028 a0=3 a1=ffffee982460 a2=0 a3=ffff976636c0 items=0 ppid=3237 pid=3770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:12.339878 kernel: audit: type=1300 audit(1707506172.318:282): arch=c00000b7 syscall=211 success=yes exit=4028 a0=3 a1=ffffee982460 a2=0 a3=ffff976636c0 items=0 ppid=3237 pid=3770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:12.318000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:16:12.346572 kernel: audit: type=1327 audit(1707506172.318:282): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:16:12.347934 kubelet[3035]: E0209 19:16:12.347895 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:12.348203 kubelet[3035]: W0209 19:16:12.348166 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:12.348429 kubelet[3035]: E0209 19:16:12.348375 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:12.349330 kubelet[3035]: E0209 19:16:12.349296 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:12.349549 kubelet[3035]: W0209 19:16:12.349515 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:12.349774 kubelet[3035]: E0209 19:16:12.349719 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:16:12.318000 audit[3770]: NETFILTER_CFG table=nat:110 family=2 entries=27 op=nft_register_chain pid=3770 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:16:12.356213 kernel: audit: type=1325 audit(1707506172.318:283): table=nat:110 family=2 entries=27 op=nft_register_chain pid=3770 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:16:12.356409 kubelet[3035]: E0209 19:16:12.356372 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:12.356514 kubelet[3035]: W0209 19:16:12.356410 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:12.356514 kubelet[3035]: E0209 19:16:12.356460 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:16:12.358299 kubelet[3035]: E0209 19:16:12.358247 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:12.358527 kubelet[3035]: W0209 19:16:12.358496 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:12.359269 kubelet[3035]: E0209 19:16:12.359237 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:12.359550 kubelet[3035]: W0209 19:16:12.359517 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:12.360141 kubelet[3035]: E0209 19:16:12.360106 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:12.360410 kubelet[3035]: E0209 19:16:12.360361 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:16:12.360858 kubelet[3035]: E0209 19:16:12.360831 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:12.361022 kubelet[3035]: W0209 19:16:12.360992 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:12.361215 kubelet[3035]: E0209 19:16:12.361191 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:12.318000 audit[3770]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8836 a0=3 a1=ffffee982460 a2=0 a3=ffff976636c0 items=0 ppid=3237 pid=3770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:12.362135 kubelet[3035]: E0209 19:16:12.362089 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:12.362339 kubelet[3035]: W0209 19:16:12.362306 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:12.362516 kubelet[3035]: E0209 19:16:12.362491 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:16:12.363185 kubelet[3035]: E0209 19:16:12.363157 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:12.363341 kubelet[3035]: W0209 19:16:12.363315 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:12.363481 kubelet[3035]: E0209 19:16:12.363460 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:12.364035 kubelet[3035]: E0209 19:16:12.364012 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:12.364173 kubelet[3035]: W0209 19:16:12.364149 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:12.364310 kubelet[3035]: E0209 19:16:12.364290 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:16:12.365445 kubelet[3035]: E0209 19:16:12.365414 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:12.365627 kubelet[3035]: W0209 19:16:12.365601 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:12.365786 kubelet[3035]: E0209 19:16:12.365747 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:12.366244 kubelet[3035]: E0209 19:16:12.366223 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:12.366374 kubelet[3035]: W0209 19:16:12.366350 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:12.366514 kubelet[3035]: E0209 19:16:12.366492 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:16:12.367146 kubelet[3035]: E0209 19:16:12.367105 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:12.367304 kubelet[3035]: W0209 19:16:12.367278 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:12.367446 kubelet[3035]: E0209 19:16:12.367426 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:12.368187 kubelet[3035]: E0209 19:16:12.368163 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:12.368335 kubelet[3035]: W0209 19:16:12.368310 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:12.368473 kubelet[3035]: E0209 19:16:12.368452 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:16:12.369069 kubelet[3035]: E0209 19:16:12.369007 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:12.369251 kubelet[3035]: W0209 19:16:12.369194 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:12.369395 kubelet[3035]: E0209 19:16:12.369375 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:12.369838 kubelet[3035]: E0209 19:16:12.369804 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:12.369972 kubelet[3035]: W0209 19:16:12.369948 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:12.370091 kubelet[3035]: E0209 19:16:12.370070 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:16:12.370604 kubelet[3035]: E0209 19:16:12.370582 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:12.370731 kubelet[3035]: W0209 19:16:12.370706 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:12.370871 kubelet[3035]: E0209 19:16:12.370850 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:12.371340 kubelet[3035]: E0209 19:16:12.371314 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:12.371494 kubelet[3035]: W0209 19:16:12.371466 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:12.371627 kubelet[3035]: E0209 19:16:12.371605 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:16:12.372431 kubelet[3035]: E0209 19:16:12.372401 3035 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:16:12.372677 kubelet[3035]: W0209 19:16:12.372649 3035 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:16:12.372834 kubelet[3035]: E0209 19:16:12.372811 3035 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:16:12.373490 kernel: audit: type=1300 audit(1707506172.318:283): arch=c00000b7 syscall=211 success=yes exit=8836 a0=3 a1=ffffee982460 a2=0 a3=ffff976636c0 items=0 ppid=3237 pid=3770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:12.318000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:16:12.382909 kernel: audit: type=1327 audit(1707506172.318:283): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:16:12.410556 env[1741]: time="2024-02-09T19:16:12.410432613Z" level=info msg="StartContainer for \"b741a64727c5e2ed54976d385c66952721ac2f31207b547ca2ef75ba54741026\" returns successfully" Feb 9 19:16:12.480784 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b741a64727c5e2ed54976d385c66952721ac2f31207b547ca2ef75ba54741026-rootfs.mount: Deactivated successfully. 
Feb 9 19:16:12.766804 env[1741]: time="2024-02-09T19:16:12.766717626Z" level=info msg="shim disconnected" id=b741a64727c5e2ed54976d385c66952721ac2f31207b547ca2ef75ba54741026 Feb 9 19:16:12.767155 env[1741]: time="2024-02-09T19:16:12.767117977Z" level=warning msg="cleaning up after shim disconnected" id=b741a64727c5e2ed54976d385c66952721ac2f31207b547ca2ef75ba54741026 namespace=k8s.io Feb 9 19:16:12.767302 env[1741]: time="2024-02-09T19:16:12.767274435Z" level=info msg="cleaning up dead shim" Feb 9 19:16:12.787057 env[1741]: time="2024-02-09T19:16:12.786984039Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:16:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3813 runtime=io.containerd.runc.v2\n" Feb 9 19:16:13.064738 kubelet[3035]: E0209 19:16:13.064588 3035 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-szjdz" podUID=9d3adc59-2fa5-4081-acf7-fb99c5b37340 Feb 9 19:16:13.217273 env[1741]: time="2024-02-09T19:16:13.216178842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\"" Feb 9 19:16:14.545544 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount626623499.mount: Deactivated successfully. 
Feb 9 19:16:15.064392 kubelet[3035]: E0209 19:16:15.064353 3035 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-szjdz" podUID=9d3adc59-2fa5-4081-acf7-fb99c5b37340 Feb 9 19:16:17.064124 kubelet[3035]: E0209 19:16:17.064082 3035 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-szjdz" podUID=9d3adc59-2fa5-4081-acf7-fb99c5b37340 Feb 9 19:16:19.063179 kubelet[3035]: E0209 19:16:19.063123 3035 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-szjdz" podUID=9d3adc59-2fa5-4081-acf7-fb99c5b37340 Feb 9 19:16:19.396575 env[1741]: time="2024-02-09T19:16:19.396365063Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:16:19.400045 env[1741]: time="2024-02-09T19:16:19.399987429Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9c9318f5fbf505fc3d84676966009a3887e58ea1e3eac10039e5a96dfceb254b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:16:19.402500 env[1741]: time="2024-02-09T19:16:19.402442538Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:16:19.405786 env[1741]: time="2024-02-09T19:16:19.405667909Z" level=info msg="ImageCreate event 
&ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:d943b4c23e82a39b0186a1a3b2fe8f728e543d503df72d7be521501a82b7e7b4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:16:19.407545 env[1741]: time="2024-02-09T19:16:19.407455263Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\" returns image reference \"sha256:9c9318f5fbf505fc3d84676966009a3887e58ea1e3eac10039e5a96dfceb254b\"" Feb 9 19:16:19.414713 env[1741]: time="2024-02-09T19:16:19.414631249Z" level=info msg="CreateContainer within sandbox \"2edab0a454c7a6440358c2c33876fced3e3911c90aaa7c0fbdbf9cd6c3fcdb37\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 9 19:16:19.442598 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4200321435.mount: Deactivated successfully. Feb 9 19:16:19.447875 env[1741]: time="2024-02-09T19:16:19.447805533Z" level=info msg="CreateContainer within sandbox \"2edab0a454c7a6440358c2c33876fced3e3911c90aaa7c0fbdbf9cd6c3fcdb37\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"601326a84d287822428b97f539903a57be6ab22ad9a9408313c26cfe0549899e\"" Feb 9 19:16:19.452030 env[1741]: time="2024-02-09T19:16:19.449957669Z" level=info msg="StartContainer for \"601326a84d287822428b97f539903a57be6ab22ad9a9408313c26cfe0549899e\"" Feb 9 19:16:19.608470 env[1741]: time="2024-02-09T19:16:19.608398177Z" level=info msg="StartContainer for \"601326a84d287822428b97f539903a57be6ab22ad9a9408313c26cfe0549899e\" returns successfully" Feb 9 19:16:20.663946 env[1741]: time="2024-02-09T19:16:20.663685098Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 19:16:20.708070 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-601326a84d287822428b97f539903a57be6ab22ad9a9408313c26cfe0549899e-rootfs.mount: Deactivated successfully. Feb 9 19:16:20.739491 kubelet[3035]: I0209 19:16:20.738968 3035 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 19:16:20.774857 kubelet[3035]: I0209 19:16:20.774783 3035 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:16:20.788017 kubelet[3035]: I0209 19:16:20.787960 3035 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:16:20.797280 kubelet[3035]: I0209 19:16:20.797214 3035 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:16:20.920647 kubelet[3035]: I0209 19:16:20.920509 3035 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e626e56e-c45b-4aee-b1a6-26ae1953bb79-config-volume\") pod \"coredns-787d4945fb-qfmtf\" (UID: \"e626e56e-c45b-4aee-b1a6-26ae1953bb79\") " pod="kube-system/coredns-787d4945fb-qfmtf" Feb 9 19:16:20.921201 kubelet[3035]: I0209 19:16:20.920923 3035 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r878\" (UniqueName: \"kubernetes.io/projected/279e36fd-cfd0-4b8d-8437-2765b9919f84-kube-api-access-5r878\") pod \"coredns-787d4945fb-r6shr\" (UID: \"279e36fd-cfd0-4b8d-8437-2765b9919f84\") " pod="kube-system/coredns-787d4945fb-r6shr" Feb 9 19:16:20.921451 kubelet[3035]: I0209 19:16:20.921420 3035 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw8xx\" (UniqueName: \"kubernetes.io/projected/e626e56e-c45b-4aee-b1a6-26ae1953bb79-kube-api-access-rw8xx\") pod \"coredns-787d4945fb-qfmtf\" (UID: \"e626e56e-c45b-4aee-b1a6-26ae1953bb79\") " pod="kube-system/coredns-787d4945fb-qfmtf" Feb 9 19:16:20.921671 kubelet[3035]: I0209 19:16:20.921642 3035 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97d159a3-eb2b-41cd-8d25-e9b1cc42f426-tigera-ca-bundle\") pod \"calico-kube-controllers-59dd7c78-j47sd\" (UID: \"97d159a3-eb2b-41cd-8d25-e9b1cc42f426\") " pod="calico-system/calico-kube-controllers-59dd7c78-j47sd" Feb 9 19:16:20.921944 kubelet[3035]: I0209 19:16:20.921909 3035 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/279e36fd-cfd0-4b8d-8437-2765b9919f84-config-volume\") pod \"coredns-787d4945fb-r6shr\" (UID: \"279e36fd-cfd0-4b8d-8437-2765b9919f84\") " pod="kube-system/coredns-787d4945fb-r6shr" Feb 9 19:16:20.922205 kubelet[3035]: I0209 19:16:20.922178 3035 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k76dh\" (UniqueName: \"kubernetes.io/projected/97d159a3-eb2b-41cd-8d25-e9b1cc42f426-kube-api-access-k76dh\") pod \"calico-kube-controllers-59dd7c78-j47sd\" (UID: \"97d159a3-eb2b-41cd-8d25-e9b1cc42f426\") " pod="calico-system/calico-kube-controllers-59dd7c78-j47sd" Feb 9 19:16:21.083097 env[1741]: time="2024-02-09T19:16:21.083013755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-qfmtf,Uid:e626e56e-c45b-4aee-b1a6-26ae1953bb79,Namespace:kube-system,Attempt:0,}" Feb 9 19:16:21.092434 env[1741]: time="2024-02-09T19:16:21.092369251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-szjdz,Uid:9d3adc59-2fa5-4081-acf7-fb99c5b37340,Namespace:calico-system,Attempt:0,}" Feb 9 19:16:21.108903 env[1741]: time="2024-02-09T19:16:21.108829292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-r6shr,Uid:279e36fd-cfd0-4b8d-8437-2765b9919f84,Namespace:kube-system,Attempt:0,}" Feb 9 19:16:21.120001 env[1741]: time="2024-02-09T19:16:21.119630624Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-59dd7c78-j47sd,Uid:97d159a3-eb2b-41cd-8d25-e9b1cc42f426,Namespace:calico-system,Attempt:0,}" Feb 9 19:16:21.831681 env[1741]: time="2024-02-09T19:16:21.831613703Z" level=info msg="shim disconnected" id=601326a84d287822428b97f539903a57be6ab22ad9a9408313c26cfe0549899e Feb 9 19:16:21.832672 env[1741]: time="2024-02-09T19:16:21.832625389Z" level=warning msg="cleaning up after shim disconnected" id=601326a84d287822428b97f539903a57be6ab22ad9a9408313c26cfe0549899e namespace=k8s.io Feb 9 19:16:21.832885 env[1741]: time="2024-02-09T19:16:21.832848298Z" level=info msg="cleaning up dead shim" Feb 9 19:16:21.872211 env[1741]: time="2024-02-09T19:16:21.872141925Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:16:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3892 runtime=io.containerd.runc.v2\n" Feb 9 19:16:22.106430 env[1741]: time="2024-02-09T19:16:22.106152852Z" level=error msg="Failed to destroy network for sandbox \"e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:16:22.107535 env[1741]: time="2024-02-09T19:16:22.107439663Z" level=error msg="encountered an error cleaning up failed sandbox \"e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:16:22.108865 env[1741]: time="2024-02-09T19:16:22.107556308Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-qfmtf,Uid:e626e56e-c45b-4aee-b1a6-26ae1953bb79,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:16:22.109271 kubelet[3035]: E0209 19:16:22.107980 3035 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:16:22.109271 kubelet[3035]: E0209 19:16:22.108064 3035 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-qfmtf" Feb 9 19:16:22.109271 kubelet[3035]: E0209 19:16:22.108103 3035 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-qfmtf" Feb 9 19:16:22.110493 kubelet[3035]: E0209 19:16:22.108202 3035 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-qfmtf_kube-system(e626e56e-c45b-4aee-b1a6-26ae1953bb79)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-787d4945fb-qfmtf_kube-system(e626e56e-c45b-4aee-b1a6-26ae1953bb79)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-qfmtf" podUID=e626e56e-c45b-4aee-b1a6-26ae1953bb79 Feb 9 19:16:22.118919 env[1741]: time="2024-02-09T19:16:22.118837316Z" level=error msg="Failed to destroy network for sandbox \"7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:16:22.119558 env[1741]: time="2024-02-09T19:16:22.119483392Z" level=error msg="encountered an error cleaning up failed sandbox \"7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:16:22.119720 env[1741]: time="2024-02-09T19:16:22.119588743Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-r6shr,Uid:279e36fd-cfd0-4b8d-8437-2765b9919f84,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:16:22.120129 kubelet[3035]: E0209 19:16:22.120077 3035 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:16:22.120319 kubelet[3035]: E0209 19:16:22.120168 3035 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-r6shr" Feb 9 19:16:22.120319 kubelet[3035]: E0209 19:16:22.120209 3035 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-r6shr" Feb 9 19:16:22.120319 kubelet[3035]: E0209 19:16:22.120309 3035 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-r6shr_kube-system(279e36fd-cfd0-4b8d-8437-2765b9919f84)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-r6shr_kube-system(279e36fd-cfd0-4b8d-8437-2765b9919f84)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-787d4945fb-r6shr" podUID=279e36fd-cfd0-4b8d-8437-2765b9919f84 Feb 9 19:16:22.138948 env[1741]: time="2024-02-09T19:16:22.138867320Z" level=error msg="Failed to destroy network for sandbox \"4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:16:22.139975 env[1741]: time="2024-02-09T19:16:22.139884021Z" level=error msg="encountered an error cleaning up failed sandbox \"4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:16:22.140329 env[1741]: time="2024-02-09T19:16:22.140239056Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59dd7c78-j47sd,Uid:97d159a3-eb2b-41cd-8d25-e9b1cc42f426,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:16:22.144901 kubelet[3035]: E0209 19:16:22.140904 3035 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:16:22.144901 kubelet[3035]: E0209 19:16:22.141014 3035 kuberuntime_sandbox.go:72] 
"Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-59dd7c78-j47sd" Feb 9 19:16:22.144901 kubelet[3035]: E0209 19:16:22.141084 3035 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-59dd7c78-j47sd" Feb 9 19:16:22.145511 kubelet[3035]: E0209 19:16:22.141200 3035 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-59dd7c78-j47sd_calico-system(97d159a3-eb2b-41cd-8d25-e9b1cc42f426)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-59dd7c78-j47sd_calico-system(97d159a3-eb2b-41cd-8d25-e9b1cc42f426)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-59dd7c78-j47sd" podUID=97d159a3-eb2b-41cd-8d25-e9b1cc42f426 Feb 9 19:16:22.148833 env[1741]: time="2024-02-09T19:16:22.148575312Z" level=error msg="Failed to destroy network for sandbox \"ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:16:22.149802 env[1741]: time="2024-02-09T19:16:22.149667396Z" level=error msg="encountered an error cleaning up failed sandbox \"ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:16:22.150102 env[1741]: time="2024-02-09T19:16:22.150038633Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-szjdz,Uid:9d3adc59-2fa5-4081-acf7-fb99c5b37340,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:16:22.150932 kubelet[3035]: E0209 19:16:22.150518 3035 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:16:22.150932 kubelet[3035]: E0209 19:16:22.150620 3035 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-system/csi-node-driver-szjdz" Feb 9 19:16:22.150932 kubelet[3035]: E0209 19:16:22.150688 3035 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-szjdz" Feb 9 19:16:22.153208 kubelet[3035]: E0209 19:16:22.153107 3035 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-szjdz_calico-system(9d3adc59-2fa5-4081-acf7-fb99c5b37340)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-szjdz_calico-system(9d3adc59-2fa5-4081-acf7-fb99c5b37340)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-szjdz" podUID=9d3adc59-2fa5-4081-acf7-fb99c5b37340 Feb 9 19:16:22.238011 kubelet[3035]: I0209 19:16:22.236996 3035 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23" Feb 9 19:16:22.242594 env[1741]: time="2024-02-09T19:16:22.242534536Z" level=info msg="StopPodSandbox for \"ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23\"" Feb 9 19:16:22.243899 kubelet[3035]: I0209 19:16:22.243671 3035 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b" Feb 9 19:16:22.247356 env[1741]: 
time="2024-02-09T19:16:22.247280121Z" level=info msg="StopPodSandbox for \"e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b\"" Feb 9 19:16:22.256357 env[1741]: time="2024-02-09T19:16:22.256295002Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\"" Feb 9 19:16:22.258146 kubelet[3035]: I0209 19:16:22.257330 3035 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b" Feb 9 19:16:22.282616 env[1741]: time="2024-02-09T19:16:22.272233239Z" level=info msg="StopPodSandbox for \"7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b\"" Feb 9 19:16:22.295843 kubelet[3035]: I0209 19:16:22.292909 3035 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7" Feb 9 19:16:22.296076 env[1741]: time="2024-02-09T19:16:22.294704198Z" level=info msg="StopPodSandbox for \"4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7\"" Feb 9 19:16:22.361575 env[1741]: time="2024-02-09T19:16:22.361382417Z" level=error msg="StopPodSandbox for \"e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b\" failed" error="failed to destroy network for sandbox \"e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:16:22.363806 kubelet[3035]: E0209 19:16:22.363059 3035 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b" Feb 9 19:16:22.363806 kubelet[3035]: E0209 19:16:22.363509 3035 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b} Feb 9 19:16:22.363806 kubelet[3035]: E0209 19:16:22.363612 3035 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e626e56e-c45b-4aee-b1a6-26ae1953bb79\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 19:16:22.363806 kubelet[3035]: E0209 19:16:22.363706 3035 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e626e56e-c45b-4aee-b1a6-26ae1953bb79\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-qfmtf" podUID=e626e56e-c45b-4aee-b1a6-26ae1953bb79 Feb 9 19:16:22.421553 env[1741]: time="2024-02-09T19:16:22.421463979Z" level=error msg="StopPodSandbox for \"4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7\" failed" error="failed to destroy network for sandbox \"4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:16:22.422292 
kubelet[3035]: E0209 19:16:22.421970 3035 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7" Feb 9 19:16:22.422292 kubelet[3035]: E0209 19:16:22.422040 3035 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7} Feb 9 19:16:22.422292 kubelet[3035]: E0209 19:16:22.422111 3035 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"97d159a3-eb2b-41cd-8d25-e9b1cc42f426\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 19:16:22.422292 kubelet[3035]: E0209 19:16:22.422168 3035 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"97d159a3-eb2b-41cd-8d25-e9b1cc42f426\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-59dd7c78-j47sd" podUID=97d159a3-eb2b-41cd-8d25-e9b1cc42f426 Feb 9 19:16:22.448656 env[1741]: 
time="2024-02-09T19:16:22.448569291Z" level=error msg="StopPodSandbox for \"ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23\" failed" error="failed to destroy network for sandbox \"ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:16:22.449737 kubelet[3035]: E0209 19:16:22.449378 3035 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23" Feb 9 19:16:22.449737 kubelet[3035]: E0209 19:16:22.449465 3035 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23} Feb 9 19:16:22.449737 kubelet[3035]: E0209 19:16:22.449556 3035 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9d3adc59-2fa5-4081-acf7-fb99c5b37340\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 19:16:22.449737 kubelet[3035]: E0209 19:16:22.449635 3035 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9d3adc59-2fa5-4081-acf7-fb99c5b37340\" with KillPodSandboxError: \"rpc error: code = 
Unknown desc = failed to destroy network for sandbox \\\"ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-szjdz" podUID=9d3adc59-2fa5-4081-acf7-fb99c5b37340 Feb 9 19:16:22.450657 env[1741]: time="2024-02-09T19:16:22.450558335Z" level=error msg="StopPodSandbox for \"7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b\" failed" error="failed to destroy network for sandbox \"7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:16:22.451448 kubelet[3035]: E0209 19:16:22.451092 3035 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b" Feb 9 19:16:22.451448 kubelet[3035]: E0209 19:16:22.451186 3035 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b} Feb 9 19:16:22.451448 kubelet[3035]: E0209 19:16:22.451286 3035 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"279e36fd-cfd0-4b8d-8437-2765b9919f84\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 19:16:22.451448 kubelet[3035]: E0209 19:16:22.451365 3035 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"279e36fd-cfd0-4b8d-8437-2765b9919f84\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-r6shr" podUID=279e36fd-cfd0-4b8d-8437-2765b9919f84 Feb 9 19:16:22.770742 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23-shm.mount: Deactivated successfully. Feb 9 19:16:22.771071 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b-shm.mount: Deactivated successfully. Feb 9 19:16:22.771362 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7-shm.mount: Deactivated successfully. Feb 9 19:16:22.771590 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b-shm.mount: Deactivated successfully. Feb 9 19:16:32.200694 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1629130558.mount: Deactivated successfully. 
Feb 9 19:16:32.297620 env[1741]: time="2024-02-09T19:16:32.297564232Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:16:32.302045 env[1741]: time="2024-02-09T19:16:32.301999974Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c445639cb28807ced09724016dc3b273b170b14d3b3d0c39b1affa1cc6b68774,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:16:32.305957 env[1741]: time="2024-02-09T19:16:32.305913542Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:16:32.309963 env[1741]: time="2024-02-09T19:16:32.309918861Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:a45dffb21a0e9ca8962f36359a2ab776beeecd93843543c2fa1745d7bbb0f754,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:16:32.311399 env[1741]: time="2024-02-09T19:16:32.311342026Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\" returns image reference \"sha256:c445639cb28807ced09724016dc3b273b170b14d3b3d0c39b1affa1cc6b68774\"" Feb 9 19:16:32.343622 env[1741]: time="2024-02-09T19:16:32.343546120Z" level=info msg="CreateContainer within sandbox \"2edab0a454c7a6440358c2c33876fced3e3911c90aaa7c0fbdbf9cd6c3fcdb37\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 9 19:16:32.382928 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2271179269.mount: Deactivated successfully. 
Feb 9 19:16:32.385931 env[1741]: time="2024-02-09T19:16:32.385862826Z" level=info msg="CreateContainer within sandbox \"2edab0a454c7a6440358c2c33876fced3e3911c90aaa7c0fbdbf9cd6c3fcdb37\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6f79caf321c8874bafdc8568d95e32e279ac0301f1bbc7f8f5c0fad242cb7994\"" Feb 9 19:16:32.390237 env[1741]: time="2024-02-09T19:16:32.390168616Z" level=info msg="StartContainer for \"6f79caf321c8874bafdc8568d95e32e279ac0301f1bbc7f8f5c0fad242cb7994\"" Feb 9 19:16:32.523032 env[1741]: time="2024-02-09T19:16:32.522721527Z" level=info msg="StartContainer for \"6f79caf321c8874bafdc8568d95e32e279ac0301f1bbc7f8f5c0fad242cb7994\" returns successfully" Feb 9 19:16:32.639806 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 9 19:16:32.640011 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Feb 9 19:16:33.364712 kubelet[3035]: I0209 19:16:33.364660 3035 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-kkzpq" podStartSLOduration=-9.223372008490171e+09 pod.CreationTimestamp="2024-02-09 19:16:05 +0000 UTC" firstStartedPulling="2024-02-09 19:16:07.17321138 +0000 UTC m=+22.756985542" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:16:33.364535428 +0000 UTC m=+48.948309770" watchObservedRunningTime="2024-02-09 19:16:33.364604946 +0000 UTC m=+48.948379132" Feb 9 19:16:34.064353 env[1741]: time="2024-02-09T19:16:34.064286668Z" level=info msg="StopPodSandbox for \"7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b\"" Feb 9 19:16:34.065256 env[1741]: time="2024-02-09T19:16:34.064628849Z" level=info msg="StopPodSandbox for \"4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7\"" Feb 9 19:16:34.435000 audit[4294]: AVC avc: denied { write } for pid=4294 comm="tee" name="fd" dev="proc" ino=21174 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:16:34.445617 kernel: audit: type=1400 audit(1707506194.435:284): avc: denied { write } for pid=4294 comm="tee" name="fd" dev="proc" ino=21174 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:16:34.491000 audit[4296]: AVC avc: denied { write } for pid=4296 comm="tee" name="fd" dev="proc" ino=22037 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:16:34.491000 audit[4296]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffffa2b7985 a2=241 a3=1b6 items=1 ppid=4256 pid=4296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:34.526419 kernel: audit: type=1400 audit(1707506194.491:285): avc: denied { write } for pid=4296 comm="tee" name="fd" dev="proc" ino=22037 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:16:34.526642 kernel: audit: type=1300 audit(1707506194.491:285): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffffa2b7985 a2=241 a3=1b6 items=1 ppid=4256 pid=4296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:34.491000 audit: CWD cwd="/etc/service/enabled/felix/log" Feb 9 19:16:34.554396 kernel: audit: type=1307 audit(1707506194.491:285): cwd="/etc/service/enabled/felix/log" Feb 9 19:16:34.491000 audit: PATH item=0 name="/dev/fd/63" inode=21157 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:16:34.563603 kernel: audit: type=1302 audit(1707506194.491:285): item=0 
name="/dev/fd/63" inode=21157 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:16:34.491000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:16:34.592365 kernel: audit: type=1327 audit(1707506194.491:285): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:16:34.435000 audit[4294]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff1496986 a2=241 a3=1b6 items=1 ppid=4263 pid=4294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:34.608279 kernel: audit: type=1300 audit(1707506194.435:284): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff1496986 a2=241 a3=1b6 items=1 ppid=4263 pid=4294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:34.608419 env[1741]: 2024-02-09 19:16:34.228 [INFO][4232] k8s.go 578: Cleaning up netns ContainerID="7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b" Feb 9 19:16:34.608419 env[1741]: 2024-02-09 19:16:34.229 [INFO][4232] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b" iface="eth0" netns="/var/run/netns/cni-7566e14c-ffb4-3ad3-b5db-3f033ddae6c8" Feb 9 19:16:34.608419 env[1741]: 2024-02-09 19:16:34.229 [INFO][4232] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b" iface="eth0" netns="/var/run/netns/cni-7566e14c-ffb4-3ad3-b5db-3f033ddae6c8" Feb 9 19:16:34.608419 env[1741]: 2024-02-09 19:16:34.229 [INFO][4232] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b" iface="eth0" netns="/var/run/netns/cni-7566e14c-ffb4-3ad3-b5db-3f033ddae6c8" Feb 9 19:16:34.608419 env[1741]: 2024-02-09 19:16:34.229 [INFO][4232] k8s.go 585: Releasing IP address(es) ContainerID="7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b" Feb 9 19:16:34.608419 env[1741]: 2024-02-09 19:16:34.230 [INFO][4232] utils.go 188: Calico CNI releasing IP address ContainerID="7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b" Feb 9 19:16:34.608419 env[1741]: 2024-02-09 19:16:34.504 [INFO][4259] ipam_plugin.go 415: Releasing address using handleID ContainerID="7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b" HandleID="k8s-pod-network.7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b" Workload="ip--172--31--18--155-k8s-coredns--787d4945fb--r6shr-eth0" Feb 9 19:16:34.608419 env[1741]: 2024-02-09 19:16:34.506 [INFO][4259] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:16:34.608419 env[1741]: 2024-02-09 19:16:34.507 [INFO][4259] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:16:34.608419 env[1741]: 2024-02-09 19:16:34.569 [WARNING][4259] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b" HandleID="k8s-pod-network.7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b" Workload="ip--172--31--18--155-k8s-coredns--787d4945fb--r6shr-eth0" Feb 9 19:16:34.608419 env[1741]: 2024-02-09 19:16:34.569 [INFO][4259] ipam_plugin.go 443: Releasing address using workloadID ContainerID="7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b" HandleID="k8s-pod-network.7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b" Workload="ip--172--31--18--155-k8s-coredns--787d4945fb--r6shr-eth0" Feb 9 19:16:34.608419 env[1741]: 2024-02-09 19:16:34.574 [INFO][4259] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:16:34.608419 env[1741]: 2024-02-09 19:16:34.584 [INFO][4232] k8s.go 591: Teardown processing complete. ContainerID="7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b" Feb 9 19:16:34.616440 kernel: audit: type=1307 audit(1707506194.435:284): cwd="/etc/service/enabled/bird/log" Feb 9 19:16:34.435000 audit: CWD cwd="/etc/service/enabled/bird/log" Feb 9 19:16:34.616995 env[1741]: time="2024-02-09T19:16:34.616927774Z" level=info msg="TearDown network for sandbox \"7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b\" successfully" Feb 9 19:16:34.617395 env[1741]: time="2024-02-09T19:16:34.617345965Z" level=info msg="StopPodSandbox for \"7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b\" returns successfully" Feb 9 19:16:34.629870 kernel: audit: type=1302 audit(1707506194.435:284): item=0 name="/dev/fd/63" inode=21158 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:16:34.435000 audit: PATH item=0 name="/dev/fd/63" inode=21158 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:16:34.622972 
systemd[1]: run-netns-cni\x2d7566e14c\x2dffb4\x2d3ad3\x2db5db\x2d3f033ddae6c8.mount: Deactivated successfully. Feb 9 19:16:34.630597 env[1741]: time="2024-02-09T19:16:34.619946778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-r6shr,Uid:279e36fd-cfd0-4b8d-8437-2765b9919f84,Namespace:kube-system,Attempt:1,}" Feb 9 19:16:34.641295 kernel: audit: type=1327 audit(1707506194.435:284): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:16:34.435000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:16:34.646000 audit[4326]: AVC avc: denied { write } for pid=4326 comm="tee" name="fd" dev="proc" ino=22056 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:16:34.646000 audit[4326]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc1e81976 a2=241 a3=1b6 items=1 ppid=4265 pid=4326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:34.646000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Feb 9 19:16:34.646000 audit: PATH item=0 name="/dev/fd/63" inode=21189 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:16:34.646000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:16:34.657000 audit[4332]: AVC avc: denied { write } for pid=4332 comm="tee" name="fd" dev="proc" ino=22060 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:16:34.657000 audit[4332]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff86ed987 a2=241 a3=1b6 items=1 ppid=4257 pid=4332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:34.657000 audit: CWD cwd="/etc/service/enabled/cni/log" Feb 9 19:16:34.657000 audit: PATH item=0 name="/dev/fd/63" inode=22051 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:16:34.657000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:16:34.665703 env[1741]: 2024-02-09 19:16:34.204 [INFO][4231] k8s.go 578: Cleaning up netns ContainerID="4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7" Feb 9 19:16:34.665703 env[1741]: 2024-02-09 19:16:34.205 [INFO][4231] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7" iface="eth0" netns="/var/run/netns/cni-053a6c09-081c-1b74-94e2-e4e897e1e43c" Feb 9 19:16:34.665703 env[1741]: 2024-02-09 19:16:34.209 [INFO][4231] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7" iface="eth0" netns="/var/run/netns/cni-053a6c09-081c-1b74-94e2-e4e897e1e43c" Feb 9 19:16:34.665703 env[1741]: 2024-02-09 19:16:34.209 [INFO][4231] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7" iface="eth0" netns="/var/run/netns/cni-053a6c09-081c-1b74-94e2-e4e897e1e43c" Feb 9 19:16:34.665703 env[1741]: 2024-02-09 19:16:34.210 [INFO][4231] k8s.go 585: Releasing IP address(es) ContainerID="4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7" Feb 9 19:16:34.665703 env[1741]: 2024-02-09 19:16:34.210 [INFO][4231] utils.go 188: Calico CNI releasing IP address ContainerID="4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7" Feb 9 19:16:34.665703 env[1741]: 2024-02-09 19:16:34.504 [INFO][4243] ipam_plugin.go 415: Releasing address using handleID ContainerID="4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7" HandleID="k8s-pod-network.4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7" Workload="ip--172--31--18--155-k8s-calico--kube--controllers--59dd7c78--j47sd-eth0" Feb 9 19:16:34.665703 env[1741]: 2024-02-09 19:16:34.507 [INFO][4243] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:16:34.665703 env[1741]: 2024-02-09 19:16:34.574 [INFO][4243] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:16:34.665703 env[1741]: 2024-02-09 19:16:34.632 [WARNING][4243] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7" HandleID="k8s-pod-network.4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7" Workload="ip--172--31--18--155-k8s-calico--kube--controllers--59dd7c78--j47sd-eth0" Feb 9 19:16:34.665703 env[1741]: 2024-02-09 19:16:34.632 [INFO][4243] ipam_plugin.go 443: Releasing address using workloadID ContainerID="4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7" HandleID="k8s-pod-network.4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7" Workload="ip--172--31--18--155-k8s-calico--kube--controllers--59dd7c78--j47sd-eth0" Feb 9 19:16:34.665703 env[1741]: 2024-02-09 19:16:34.642 [INFO][4243] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:16:34.665703 env[1741]: 2024-02-09 19:16:34.661 [INFO][4231] k8s.go 591: Teardown processing complete. ContainerID="4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7" Feb 9 19:16:34.667000 audit[4336]: AVC avc: denied { write } for pid=4336 comm="tee" name="fd" dev="proc" ino=22066 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:16:34.667000 audit[4336]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff699f975 a2=241 a3=1b6 items=1 ppid=4264 pid=4336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:34.667000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Feb 9 19:16:34.667000 audit: PATH item=0 name="/dev/fd/63" inode=21199 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:16:34.667000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:16:34.672042 env[1741]: time="2024-02-09T19:16:34.671976196Z" level=info msg="TearDown network for sandbox \"4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7\" successfully" Feb 9 19:16:34.672260 env[1741]: time="2024-02-09T19:16:34.672219067Z" level=info msg="StopPodSandbox for \"4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7\" returns successfully" Feb 9 19:16:34.677247 systemd[1]: run-netns-cni\x2d053a6c09\x2d081c\x2d1b74\x2d94e2\x2de4e897e1e43c.mount: Deactivated successfully. Feb 9 19:16:34.683725 env[1741]: time="2024-02-09T19:16:34.683411258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59dd7c78-j47sd,Uid:97d159a3-eb2b-41cd-8d25-e9b1cc42f426,Namespace:calico-system,Attempt:1,}" Feb 9 19:16:34.693000 audit[4334]: AVC avc: denied { write } for pid=4334 comm="tee" name="fd" dev="proc" ino=21203 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:16:34.693000 audit[4334]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffdda27985 a2=241 a3=1b6 items=1 ppid=4261 pid=4334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:34.693000 audit: CWD cwd="/etc/service/enabled/bird6/log" Feb 9 19:16:34.693000 audit: PATH item=0 name="/dev/fd/63" inode=22053 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:16:34.693000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:16:34.730000 
audit[4341]: AVC avc: denied { write } for pid=4341 comm="tee" name="fd" dev="proc" ino=22075 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:16:34.730000 audit[4341]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc489d985 a2=241 a3=1b6 items=1 ppid=4266 pid=4341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:34.730000 audit: CWD cwd="/etc/service/enabled/confd/log" Feb 9 19:16:34.730000 audit: PATH item=0 name="/dev/fd/63" inode=21200 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:16:34.730000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:16:35.088835 env[1741]: time="2024-02-09T19:16:35.088633846Z" level=info msg="StopPodSandbox for \"ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23\"" Feb 9 19:16:35.821091 systemd-networkd[1533]: cali01faa0bbf5e: Link UP Feb 9 19:16:35.823244 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:16:35.823400 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali01faa0bbf5e: link becomes ready Feb 9 19:16:35.823650 systemd-networkd[1533]: cali01faa0bbf5e: Gained carrier Feb 9 19:16:35.836153 (udev-worker)[4470]: Network interface NamePolicy= disabled on kernel command line. 
Feb 9 19:16:35.865506 env[1741]: 2024-02-09 19:16:35.325 [INFO][4356] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--155-k8s-coredns--787d4945fb--r6shr-eth0 coredns-787d4945fb- kube-system 279e36fd-cfd0-4b8d-8437-2765b9919f84 688 0 2024-02-09 19:15:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:787d4945fb projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-18-155 coredns-787d4945fb-r6shr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali01faa0bbf5e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="027fbb37873650f2213da3c7b5ee1705470dfa7c9007f8bbfbe13882ca72093d" Namespace="kube-system" Pod="coredns-787d4945fb-r6shr" WorkloadEndpoint="ip--172--31--18--155-k8s-coredns--787d4945fb--r6shr-" Feb 9 19:16:35.865506 env[1741]: 2024-02-09 19:16:35.326 [INFO][4356] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="027fbb37873650f2213da3c7b5ee1705470dfa7c9007f8bbfbe13882ca72093d" Namespace="kube-system" Pod="coredns-787d4945fb-r6shr" WorkloadEndpoint="ip--172--31--18--155-k8s-coredns--787d4945fb--r6shr-eth0" Feb 9 19:16:35.865506 env[1741]: 2024-02-09 19:16:35.657 [INFO][4417] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="027fbb37873650f2213da3c7b5ee1705470dfa7c9007f8bbfbe13882ca72093d" HandleID="k8s-pod-network.027fbb37873650f2213da3c7b5ee1705470dfa7c9007f8bbfbe13882ca72093d" Workload="ip--172--31--18--155-k8s-coredns--787d4945fb--r6shr-eth0" Feb 9 19:16:35.865506 env[1741]: 2024-02-09 19:16:35.694 [INFO][4417] ipam_plugin.go 268: Auto assigning IP ContainerID="027fbb37873650f2213da3c7b5ee1705470dfa7c9007f8bbfbe13882ca72093d" HandleID="k8s-pod-network.027fbb37873650f2213da3c7b5ee1705470dfa7c9007f8bbfbe13882ca72093d" Workload="ip--172--31--18--155-k8s-coredns--787d4945fb--r6shr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0x40002031a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-18-155", "pod":"coredns-787d4945fb-r6shr", "timestamp":"2024-02-09 19:16:35.656592046 +0000 UTC"}, Hostname:"ip-172-31-18-155", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 19:16:35.865506 env[1741]: 2024-02-09 19:16:35.694 [INFO][4417] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:16:35.865506 env[1741]: 2024-02-09 19:16:35.695 [INFO][4417] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:16:35.865506 env[1741]: 2024-02-09 19:16:35.695 [INFO][4417] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-155' Feb 9 19:16:35.865506 env[1741]: 2024-02-09 19:16:35.709 [INFO][4417] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.027fbb37873650f2213da3c7b5ee1705470dfa7c9007f8bbfbe13882ca72093d" host="ip-172-31-18-155" Feb 9 19:16:35.865506 env[1741]: 2024-02-09 19:16:35.725 [INFO][4417] ipam.go 372: Looking up existing affinities for host host="ip-172-31-18-155" Feb 9 19:16:35.865506 env[1741]: 2024-02-09 19:16:35.749 [INFO][4417] ipam.go 489: Trying affinity for 192.168.95.64/26 host="ip-172-31-18-155" Feb 9 19:16:35.865506 env[1741]: 2024-02-09 19:16:35.751 [INFO][4417] ipam.go 155: Attempting to load block cidr=192.168.95.64/26 host="ip-172-31-18-155" Feb 9 19:16:35.865506 env[1741]: 2024-02-09 19:16:35.757 [INFO][4417] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.95.64/26 host="ip-172-31-18-155" Feb 9 19:16:35.865506 env[1741]: 2024-02-09 19:16:35.757 [INFO][4417] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.95.64/26 handle="k8s-pod-network.027fbb37873650f2213da3c7b5ee1705470dfa7c9007f8bbfbe13882ca72093d" host="ip-172-31-18-155" Feb 9 19:16:35.865506 env[1741]: 2024-02-09 
19:16:35.760 [INFO][4417] ipam.go 1682: Creating new handle: k8s-pod-network.027fbb37873650f2213da3c7b5ee1705470dfa7c9007f8bbfbe13882ca72093d Feb 9 19:16:35.865506 env[1741]: 2024-02-09 19:16:35.768 [INFO][4417] ipam.go 1203: Writing block in order to claim IPs block=192.168.95.64/26 handle="k8s-pod-network.027fbb37873650f2213da3c7b5ee1705470dfa7c9007f8bbfbe13882ca72093d" host="ip-172-31-18-155" Feb 9 19:16:35.865506 env[1741]: 2024-02-09 19:16:35.785 [INFO][4417] ipam.go 1216: Successfully claimed IPs: [192.168.95.65/26] block=192.168.95.64/26 handle="k8s-pod-network.027fbb37873650f2213da3c7b5ee1705470dfa7c9007f8bbfbe13882ca72093d" host="ip-172-31-18-155" Feb 9 19:16:35.865506 env[1741]: 2024-02-09 19:16:35.785 [INFO][4417] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.95.65/26] handle="k8s-pod-network.027fbb37873650f2213da3c7b5ee1705470dfa7c9007f8bbfbe13882ca72093d" host="ip-172-31-18-155" Feb 9 19:16:35.865506 env[1741]: 2024-02-09 19:16:35.785 [INFO][4417] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 19:16:35.865506 env[1741]: 2024-02-09 19:16:35.785 [INFO][4417] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.95.65/26] IPv6=[] ContainerID="027fbb37873650f2213da3c7b5ee1705470dfa7c9007f8bbfbe13882ca72093d" HandleID="k8s-pod-network.027fbb37873650f2213da3c7b5ee1705470dfa7c9007f8bbfbe13882ca72093d" Workload="ip--172--31--18--155-k8s-coredns--787d4945fb--r6shr-eth0" Feb 9 19:16:35.867138 env[1741]: 2024-02-09 19:16:35.794 [INFO][4356] k8s.go 385: Populated endpoint ContainerID="027fbb37873650f2213da3c7b5ee1705470dfa7c9007f8bbfbe13882ca72093d" Namespace="kube-system" Pod="coredns-787d4945fb-r6shr" WorkloadEndpoint="ip--172--31--18--155-k8s-coredns--787d4945fb--r6shr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--155-k8s-coredns--787d4945fb--r6shr-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"279e36fd-cfd0-4b8d-8437-2765b9919f84", ResourceVersion:"688", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 15, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-155", ContainerID:"", Pod:"coredns-787d4945fb-r6shr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.95.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali01faa0bbf5e", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:16:35.867138 env[1741]: 2024-02-09 19:16:35.794 [INFO][4356] k8s.go 386: Calico CNI using IPs: [192.168.95.65/32] ContainerID="027fbb37873650f2213da3c7b5ee1705470dfa7c9007f8bbfbe13882ca72093d" Namespace="kube-system" Pod="coredns-787d4945fb-r6shr" WorkloadEndpoint="ip--172--31--18--155-k8s-coredns--787d4945fb--r6shr-eth0" Feb 9 19:16:35.867138 env[1741]: 2024-02-09 19:16:35.794 [INFO][4356] dataplane_linux.go 68: Setting the host side veth name to cali01faa0bbf5e ContainerID="027fbb37873650f2213da3c7b5ee1705470dfa7c9007f8bbfbe13882ca72093d" Namespace="kube-system" Pod="coredns-787d4945fb-r6shr" WorkloadEndpoint="ip--172--31--18--155-k8s-coredns--787d4945fb--r6shr-eth0" Feb 9 19:16:35.867138 env[1741]: 2024-02-09 19:16:35.825 [INFO][4356] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="027fbb37873650f2213da3c7b5ee1705470dfa7c9007f8bbfbe13882ca72093d" Namespace="kube-system" Pod="coredns-787d4945fb-r6shr" WorkloadEndpoint="ip--172--31--18--155-k8s-coredns--787d4945fb--r6shr-eth0" Feb 9 19:16:35.867138 env[1741]: 2024-02-09 19:16:35.831 [INFO][4356] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="027fbb37873650f2213da3c7b5ee1705470dfa7c9007f8bbfbe13882ca72093d" Namespace="kube-system" Pod="coredns-787d4945fb-r6shr" WorkloadEndpoint="ip--172--31--18--155-k8s-coredns--787d4945fb--r6shr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--155-k8s-coredns--787d4945fb--r6shr-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"279e36fd-cfd0-4b8d-8437-2765b9919f84", ResourceVersion:"688", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 15, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-155", ContainerID:"027fbb37873650f2213da3c7b5ee1705470dfa7c9007f8bbfbe13882ca72093d", Pod:"coredns-787d4945fb-r6shr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.95.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali01faa0bbf5e", MAC:"72:de:89:3b:00:ec", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:16:35.867138 env[1741]: 2024-02-09 19:16:35.855 [INFO][4356] k8s.go 491: Wrote updated endpoint to datastore ContainerID="027fbb37873650f2213da3c7b5ee1705470dfa7c9007f8bbfbe13882ca72093d" Namespace="kube-system" Pod="coredns-787d4945fb-r6shr" 
WorkloadEndpoint="ip--172--31--18--155-k8s-coredns--787d4945fb--r6shr-eth0" Feb 9 19:16:35.938625 (udev-worker)[4469]: Network interface NamePolicy= disabled on kernel command line. Feb 9 19:16:35.979495 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali0fd0eacd73f: link becomes ready Feb 9 19:16:35.978647 systemd-networkd[1533]: cali0fd0eacd73f: Link UP Feb 9 19:16:35.979197 systemd-networkd[1533]: cali0fd0eacd73f: Gained carrier Feb 9 19:16:36.000858 env[1741]: time="2024-02-09T19:16:35.999631449Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:16:36.000858 env[1741]: time="2024-02-09T19:16:35.999844278Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:16:36.000858 env[1741]: time="2024-02-09T19:16:35.999942353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:16:36.000858 env[1741]: time="2024-02-09T19:16:36.000387651Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/027fbb37873650f2213da3c7b5ee1705470dfa7c9007f8bbfbe13882ca72093d pid=4491 runtime=io.containerd.runc.v2 Feb 9 19:16:36.051959 env[1741]: 2024-02-09 19:16:35.369 [INFO][4363] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--155-k8s-calico--kube--controllers--59dd7c78--j47sd-eth0 calico-kube-controllers-59dd7c78- calico-system 97d159a3-eb2b-41cd-8d25-e9b1cc42f426 687 0 2024-02-09 19:16:05 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:59dd7c78 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-18-155 calico-kube-controllers-59dd7c78-j47sd 
eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali0fd0eacd73f [] []}} ContainerID="986d5c1b5388380fb30c75c948b6fdc148ddcaa0cb685eef909395232c3888c1" Namespace="calico-system" Pod="calico-kube-controllers-59dd7c78-j47sd" WorkloadEndpoint="ip--172--31--18--155-k8s-calico--kube--controllers--59dd7c78--j47sd-" Feb 9 19:16:36.051959 env[1741]: 2024-02-09 19:16:35.369 [INFO][4363] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="986d5c1b5388380fb30c75c948b6fdc148ddcaa0cb685eef909395232c3888c1" Namespace="calico-system" Pod="calico-kube-controllers-59dd7c78-j47sd" WorkloadEndpoint="ip--172--31--18--155-k8s-calico--kube--controllers--59dd7c78--j47sd-eth0" Feb 9 19:16:36.051959 env[1741]: 2024-02-09 19:16:35.721 [INFO][4421] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="986d5c1b5388380fb30c75c948b6fdc148ddcaa0cb685eef909395232c3888c1" HandleID="k8s-pod-network.986d5c1b5388380fb30c75c948b6fdc148ddcaa0cb685eef909395232c3888c1" Workload="ip--172--31--18--155-k8s-calico--kube--controllers--59dd7c78--j47sd-eth0" Feb 9 19:16:36.051959 env[1741]: 2024-02-09 19:16:35.763 [INFO][4421] ipam_plugin.go 268: Auto assigning IP ContainerID="986d5c1b5388380fb30c75c948b6fdc148ddcaa0cb685eef909395232c3888c1" HandleID="k8s-pod-network.986d5c1b5388380fb30c75c948b6fdc148ddcaa0cb685eef909395232c3888c1" Workload="ip--172--31--18--155-k8s-calico--kube--controllers--59dd7c78--j47sd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002b50a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-155", "pod":"calico-kube-controllers-59dd7c78-j47sd", "timestamp":"2024-02-09 19:16:35.721356302 +0000 UTC"}, Hostname:"ip-172-31-18-155", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 19:16:36.051959 env[1741]: 
2024-02-09 19:16:35.768 [INFO][4421] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:16:36.051959 env[1741]: 2024-02-09 19:16:35.786 [INFO][4421] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:16:36.051959 env[1741]: 2024-02-09 19:16:35.787 [INFO][4421] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-155' Feb 9 19:16:36.051959 env[1741]: 2024-02-09 19:16:35.795 [INFO][4421] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.986d5c1b5388380fb30c75c948b6fdc148ddcaa0cb685eef909395232c3888c1" host="ip-172-31-18-155" Feb 9 19:16:36.051959 env[1741]: 2024-02-09 19:16:35.832 [INFO][4421] ipam.go 372: Looking up existing affinities for host host="ip-172-31-18-155" Feb 9 19:16:36.051959 env[1741]: 2024-02-09 19:16:35.852 [INFO][4421] ipam.go 489: Trying affinity for 192.168.95.64/26 host="ip-172-31-18-155" Feb 9 19:16:36.051959 env[1741]: 2024-02-09 19:16:35.859 [INFO][4421] ipam.go 155: Attempting to load block cidr=192.168.95.64/26 host="ip-172-31-18-155" Feb 9 19:16:36.051959 env[1741]: 2024-02-09 19:16:35.864 [INFO][4421] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.95.64/26 host="ip-172-31-18-155" Feb 9 19:16:36.051959 env[1741]: 2024-02-09 19:16:35.865 [INFO][4421] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.95.64/26 handle="k8s-pod-network.986d5c1b5388380fb30c75c948b6fdc148ddcaa0cb685eef909395232c3888c1" host="ip-172-31-18-155" Feb 9 19:16:36.051959 env[1741]: 2024-02-09 19:16:35.872 [INFO][4421] ipam.go 1682: Creating new handle: k8s-pod-network.986d5c1b5388380fb30c75c948b6fdc148ddcaa0cb685eef909395232c3888c1 Feb 9 19:16:36.051959 env[1741]: 2024-02-09 19:16:35.900 [INFO][4421] ipam.go 1203: Writing block in order to claim IPs block=192.168.95.64/26 handle="k8s-pod-network.986d5c1b5388380fb30c75c948b6fdc148ddcaa0cb685eef909395232c3888c1" host="ip-172-31-18-155" Feb 9 19:16:36.051959 env[1741]: 2024-02-09 19:16:35.918 [INFO][4421] 
ipam.go 1216: Successfully claimed IPs: [192.168.95.66/26] block=192.168.95.64/26 handle="k8s-pod-network.986d5c1b5388380fb30c75c948b6fdc148ddcaa0cb685eef909395232c3888c1" host="ip-172-31-18-155" Feb 9 19:16:36.051959 env[1741]: 2024-02-09 19:16:35.918 [INFO][4421] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.95.66/26] handle="k8s-pod-network.986d5c1b5388380fb30c75c948b6fdc148ddcaa0cb685eef909395232c3888c1" host="ip-172-31-18-155" Feb 9 19:16:36.051959 env[1741]: 2024-02-09 19:16:35.918 [INFO][4421] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:16:36.051959 env[1741]: 2024-02-09 19:16:35.918 [INFO][4421] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.95.66/26] IPv6=[] ContainerID="986d5c1b5388380fb30c75c948b6fdc148ddcaa0cb685eef909395232c3888c1" HandleID="k8s-pod-network.986d5c1b5388380fb30c75c948b6fdc148ddcaa0cb685eef909395232c3888c1" Workload="ip--172--31--18--155-k8s-calico--kube--controllers--59dd7c78--j47sd-eth0" Feb 9 19:16:36.053283 env[1741]: 2024-02-09 19:16:35.924 [INFO][4363] k8s.go 385: Populated endpoint ContainerID="986d5c1b5388380fb30c75c948b6fdc148ddcaa0cb685eef909395232c3888c1" Namespace="calico-system" Pod="calico-kube-controllers-59dd7c78-j47sd" WorkloadEndpoint="ip--172--31--18--155-k8s-calico--kube--controllers--59dd7c78--j47sd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--155-k8s-calico--kube--controllers--59dd7c78--j47sd-eth0", GenerateName:"calico-kube-controllers-59dd7c78-", Namespace:"calico-system", SelfLink:"", UID:"97d159a3-eb2b-41cd-8d25-e9b1cc42f426", ResourceVersion:"687", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 16, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59dd7c78", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-155", ContainerID:"", Pod:"calico-kube-controllers-59dd7c78-j47sd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.95.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0fd0eacd73f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:16:36.053283 env[1741]: 2024-02-09 19:16:35.924 [INFO][4363] k8s.go 386: Calico CNI using IPs: [192.168.95.66/32] ContainerID="986d5c1b5388380fb30c75c948b6fdc148ddcaa0cb685eef909395232c3888c1" Namespace="calico-system" Pod="calico-kube-controllers-59dd7c78-j47sd" WorkloadEndpoint="ip--172--31--18--155-k8s-calico--kube--controllers--59dd7c78--j47sd-eth0" Feb 9 19:16:36.053283 env[1741]: 2024-02-09 19:16:35.924 [INFO][4363] dataplane_linux.go 68: Setting the host side veth name to cali0fd0eacd73f ContainerID="986d5c1b5388380fb30c75c948b6fdc148ddcaa0cb685eef909395232c3888c1" Namespace="calico-system" Pod="calico-kube-controllers-59dd7c78-j47sd" WorkloadEndpoint="ip--172--31--18--155-k8s-calico--kube--controllers--59dd7c78--j47sd-eth0" Feb 9 19:16:36.053283 env[1741]: 2024-02-09 19:16:35.979 [INFO][4363] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="986d5c1b5388380fb30c75c948b6fdc148ddcaa0cb685eef909395232c3888c1" Namespace="calico-system" Pod="calico-kube-controllers-59dd7c78-j47sd" WorkloadEndpoint="ip--172--31--18--155-k8s-calico--kube--controllers--59dd7c78--j47sd-eth0" Feb 9 19:16:36.053283 env[1741]: 2024-02-09 19:16:36.007 [INFO][4363] 
k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="986d5c1b5388380fb30c75c948b6fdc148ddcaa0cb685eef909395232c3888c1" Namespace="calico-system" Pod="calico-kube-controllers-59dd7c78-j47sd" WorkloadEndpoint="ip--172--31--18--155-k8s-calico--kube--controllers--59dd7c78--j47sd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--155-k8s-calico--kube--controllers--59dd7c78--j47sd-eth0", GenerateName:"calico-kube-controllers-59dd7c78-", Namespace:"calico-system", SelfLink:"", UID:"97d159a3-eb2b-41cd-8d25-e9b1cc42f426", ResourceVersion:"687", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 16, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59dd7c78", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-155", ContainerID:"986d5c1b5388380fb30c75c948b6fdc148ddcaa0cb685eef909395232c3888c1", Pod:"calico-kube-controllers-59dd7c78-j47sd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.95.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0fd0eacd73f", MAC:"22:6a:b2:4e:ab:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:16:36.053283 env[1741]: 2024-02-09 19:16:36.042 [INFO][4363] k8s.go 491: Wrote updated endpoint to 
datastore ContainerID="986d5c1b5388380fb30c75c948b6fdc148ddcaa0cb685eef909395232c3888c1" Namespace="calico-system" Pod="calico-kube-controllers-59dd7c78-j47sd" WorkloadEndpoint="ip--172--31--18--155-k8s-calico--kube--controllers--59dd7c78--j47sd-eth0" Feb 9 19:16:36.068816 env[1741]: 2024-02-09 19:16:35.639 [INFO][4403] k8s.go 578: Cleaning up netns ContainerID="ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23" Feb 9 19:16:36.068816 env[1741]: 2024-02-09 19:16:35.639 [INFO][4403] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23" iface="eth0" netns="/var/run/netns/cni-6d4d6f56-8301-499b-2709-f034f6f5d2ec" Feb 9 19:16:36.068816 env[1741]: 2024-02-09 19:16:35.639 [INFO][4403] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23" iface="eth0" netns="/var/run/netns/cni-6d4d6f56-8301-499b-2709-f034f6f5d2ec" Feb 9 19:16:36.068816 env[1741]: 2024-02-09 19:16:35.640 [INFO][4403] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23" iface="eth0" netns="/var/run/netns/cni-6d4d6f56-8301-499b-2709-f034f6f5d2ec" Feb 9 19:16:36.068816 env[1741]: 2024-02-09 19:16:35.641 [INFO][4403] k8s.go 585: Releasing IP address(es) ContainerID="ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23" Feb 9 19:16:36.068816 env[1741]: 2024-02-09 19:16:35.641 [INFO][4403] utils.go 188: Calico CNI releasing IP address ContainerID="ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23" Feb 9 19:16:36.068816 env[1741]: 2024-02-09 19:16:35.798 [INFO][4458] ipam_plugin.go 415: Releasing address using handleID ContainerID="ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23" HandleID="k8s-pod-network.ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23" Workload="ip--172--31--18--155-k8s-csi--node--driver--szjdz-eth0" Feb 9 19:16:36.068816 env[1741]: 2024-02-09 19:16:35.801 [INFO][4458] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:16:36.068816 env[1741]: 2024-02-09 19:16:35.919 [INFO][4458] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:16:36.068816 env[1741]: 2024-02-09 19:16:35.986 [WARNING][4458] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23" HandleID="k8s-pod-network.ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23" Workload="ip--172--31--18--155-k8s-csi--node--driver--szjdz-eth0" Feb 9 19:16:36.068816 env[1741]: 2024-02-09 19:16:35.986 [INFO][4458] ipam_plugin.go 443: Releasing address using workloadID ContainerID="ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23" HandleID="k8s-pod-network.ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23" Workload="ip--172--31--18--155-k8s-csi--node--driver--szjdz-eth0" Feb 9 19:16:36.068816 env[1741]: 2024-02-09 19:16:36.008 [INFO][4458] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:16:36.068816 env[1741]: 2024-02-09 19:16:36.044 [INFO][4403] k8s.go 591: Teardown processing complete. ContainerID="ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23" Feb 9 19:16:36.068816 env[1741]: time="2024-02-09T19:16:36.067507337Z" level=info msg="TearDown network for sandbox \"ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23\" successfully" Feb 9 19:16:36.068816 env[1741]: time="2024-02-09T19:16:36.067598850Z" level=info msg="StopPodSandbox for \"ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23\" returns successfully" Feb 9 19:16:36.066408 systemd[1]: run-netns-cni\x2d6d4d6f56\x2d8301\x2d499b\x2d2709\x2df034f6f5d2ec.mount: Deactivated successfully. 
Feb 9 19:16:36.083124 env[1741]: time="2024-02-09T19:16:36.072567943Z" level=info msg="StopPodSandbox for \"e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b\"" Feb 9 19:16:36.083124 env[1741]: time="2024-02-09T19:16:36.082579080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-szjdz,Uid:9d3adc59-2fa5-4081-acf7-fb99c5b37340,Namespace:calico-system,Attempt:1,}" Feb 9 19:16:36.310470 systemd-networkd[1533]: vxlan.calico: Link UP Feb 9 19:16:36.310485 systemd-networkd[1533]: vxlan.calico: Gained carrier Feb 9 19:16:36.350471 env[1741]: time="2024-02-09T19:16:36.350272352Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:16:36.352905 env[1741]: time="2024-02-09T19:16:36.352828409Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:16:36.353202 env[1741]: time="2024-02-09T19:16:36.353138175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:16:36.360198 env[1741]: time="2024-02-09T19:16:36.360095352Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/986d5c1b5388380fb30c75c948b6fdc148ddcaa0cb685eef909395232c3888c1 pid=4571 runtime=io.containerd.runc.v2 Feb 9 19:16:36.370000 audit[4595]: AVC avc: denied { bpf } for pid=4595 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:16:36.370000 audit[4595]: AVC avc: denied { bpf } for pid=4595 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:16:36.370000 audit[4595]: AVC avc: denied { perfmon } for pid=4595 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:16:36.370000 audit[4595]: AVC avc: denied { perfmon } for pid=4595 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:16:36.370000 audit[4595]: AVC avc: denied { perfmon } for pid=4595 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:16:36.370000 audit[4595]: AVC avc: denied { perfmon } for pid=4595 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:16:36.370000 audit[4595]: AVC avc: denied { perfmon } for pid=4595 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:16:36.370000 audit[4595]: AVC avc: denied { bpf } for pid=4595 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:16:36.370000 audit[4595]: AVC avc: denied { bpf } for pid=4595 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:16:36.370000 audit: BPF prog-id=10 op=LOAD Feb 9 19:16:36.370000 audit[4595]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffdce37ae8 a2=70 a3=0 items=0 ppid=4258 pid=4595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:36.370000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:16:36.370000 audit: BPF prog-id=10 op=UNLOAD Feb 9 19:16:36.370000 audit[4595]: AVC avc: denied { bpf } for pid=4595 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:16:36.370000 audit[4595]: AVC avc: denied { bpf } for pid=4595 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:16:36.370000 audit[4595]: AVC avc: denied { perfmon } for pid=4595 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:16:36.370000 audit[4595]: AVC avc: denied { perfmon } for pid=4595 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:16:36.370000 audit[4595]: AVC avc: denied { perfmon } for pid=4595 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:16:36.370000 audit[4595]: AVC avc: denied { perfmon } for pid=4595 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:16:36.370000 audit[4595]: AVC avc: denied { perfmon } for pid=4595 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:16:36.370000 audit[4595]: AVC avc: denied { bpf } for pid=4595 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:16:36.370000 audit[4595]: AVC avc: denied { bpf } for pid=4595 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:16:36.370000 audit: BPF prog-id=11 op=LOAD Feb 9 19:16:36.370000 audit[4595]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffdce37ae8 a2=70 a3=4a174c items=0 ppid=4258 pid=4595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:36.370000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:16:36.370000 audit: BPF prog-id=11 op=UNLOAD Feb 9 19:16:36.370000 audit[4595]: AVC avc: denied { bpf } for pid=4595 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:16:36.370000 audit[4595]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=0 a1=ffffdce37b18 a2=70 a3=32bb673f items=0 ppid=4258 pid=4595 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:36.370000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:16:36.372000 audit[4595]: AVC avc: denied { bpf } for pid=4595 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:16:36.372000 audit[4595]: AVC avc: denied { bpf } for pid=4595 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:16:36.372000 audit[4595]: AVC avc: denied { bpf } for pid=4595 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:16:36.372000 audit[4595]: AVC avc: denied { perfmon } for pid=4595 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:16:36.372000 audit[4595]: AVC avc: denied { perfmon } for pid=4595 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:16:36.372000 audit[4595]: AVC avc: denied { perfmon } for pid=4595 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:16:36.372000 audit[4595]: AVC avc: denied { perfmon } for pid=4595 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:16:36.372000 audit[4595]: AVC avc: denied { perfmon } for 
pid=4595 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:16:36.372000 audit[4595]: AVC avc: denied { bpf } for pid=4595 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:16:36.372000 audit[4595]: AVC avc: denied { bpf } for pid=4595 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:16:36.372000 audit: BPF prog-id=12 op=LOAD Feb 9 19:16:36.372000 audit[4595]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffdce37a68 a2=70 a3=32bb6759 items=0 ppid=4258 pid=4595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:36.372000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:16:36.395000 audit[4599]: AVC avc: denied { bpf } for pid=4599 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:16:36.395000 audit[4599]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=fffffaf1c1a8 a2=70 a3=0 items=0 ppid=4258 pid=4599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:36.395000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Feb 9 
19:16:36.395000 audit[4599]: AVC avc: denied { bpf } for pid=4599 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:16:36.395000 audit[4599]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=fffffaf1c088 a2=70 a3=2 items=0 ppid=4258 pid=4599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:36.395000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Feb 9 19:16:36.448000 audit: BPF prog-id=12 op=UNLOAD Feb 9 19:16:36.555523 env[1741]: time="2024-02-09T19:16:36.553894436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-r6shr,Uid:279e36fd-cfd0-4b8d-8437-2765b9919f84,Namespace:kube-system,Attempt:1,} returns sandbox id \"027fbb37873650f2213da3c7b5ee1705470dfa7c9007f8bbfbe13882ca72093d\"" Feb 9 19:16:36.580992 env[1741]: time="2024-02-09T19:16:36.580873899Z" level=info msg="CreateContainer within sandbox \"027fbb37873650f2213da3c7b5ee1705470dfa7c9007f8bbfbe13882ca72093d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 19:16:36.748900 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2430141199.mount: Deactivated successfully. Feb 9 19:16:36.771807 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1330621713.mount: Deactivated successfully. 
Feb 9 19:16:36.795202 env[1741]: time="2024-02-09T19:16:36.795133275Z" level=info msg="CreateContainer within sandbox \"027fbb37873650f2213da3c7b5ee1705470dfa7c9007f8bbfbe13882ca72093d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fa7c71a3af574969aa07736e89661264cb4a2e3bbaf20ff3919c0d6ed304594f\"" Feb 9 19:16:36.803709 env[1741]: time="2024-02-09T19:16:36.801364901Z" level=info msg="StartContainer for \"fa7c71a3af574969aa07736e89661264cb4a2e3bbaf20ff3919c0d6ed304594f\"" Feb 9 19:16:36.857000 audit[4669]: NETFILTER_CFG table=mangle:111 family=2 entries=19 op=nft_register_chain pid=4669 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:16:36.857000 audit[4669]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6800 a0=3 a1=ffffeaaa47c0 a2=0 a3=ffff9bedcfa8 items=0 ppid=4258 pid=4669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:36.857000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:16:36.884000 audit[4670]: NETFILTER_CFG table=nat:112 family=2 entries=16 op=nft_register_chain pid=4670 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:16:36.884000 audit[4670]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5188 a0=3 a1=ffffe89b6410 a2=0 a3=ffff8c4f3fa8 items=0 ppid=4258 pid=4670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:36.884000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:16:36.901000 audit[4668]: 
NETFILTER_CFG table=raw:113 family=2 entries=19 op=nft_register_chain pid=4668 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:16:36.901000 audit[4668]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6132 a0=3 a1=ffffc25bab00 a2=0 a3=ffff8b6bcfa8 items=0 ppid=4258 pid=4668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:36.901000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:16:36.913255 env[1741]: time="2024-02-09T19:16:36.913182707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59dd7c78-j47sd,Uid:97d159a3-eb2b-41cd-8d25-e9b1cc42f426,Namespace:calico-system,Attempt:1,} returns sandbox id \"986d5c1b5388380fb30c75c948b6fdc148ddcaa0cb685eef909395232c3888c1\"" Feb 9 19:16:36.921000 audit[4677]: NETFILTER_CFG table=filter:114 family=2 entries=103 op=nft_register_chain pid=4677 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:16:36.921000 audit[4677]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=54800 a0=3 a1=ffffc3026b80 a2=0 a3=ffff9ecfafa8 items=0 ppid=4258 pid=4677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:36.921000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:16:36.930082 env[1741]: time="2024-02-09T19:16:36.929997559Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.27.0\"" Feb 9 19:16:37.089033 systemd-networkd[1533]: cali0fd0eacd73f: Gained IPv6LL Feb 9 
19:16:37.109461 env[1741]: 2024-02-09 19:16:36.655 [INFO][4561] k8s.go 578: Cleaning up netns ContainerID="e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b" Feb 9 19:16:37.109461 env[1741]: 2024-02-09 19:16:36.665 [INFO][4561] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b" iface="eth0" netns="/var/run/netns/cni-1ed88143-f521-4a5a-e861-b1dd66955cc4" Feb 9 19:16:37.109461 env[1741]: 2024-02-09 19:16:36.670 [INFO][4561] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b" iface="eth0" netns="/var/run/netns/cni-1ed88143-f521-4a5a-e861-b1dd66955cc4" Feb 9 19:16:37.109461 env[1741]: 2024-02-09 19:16:36.670 [INFO][4561] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b" iface="eth0" netns="/var/run/netns/cni-1ed88143-f521-4a5a-e861-b1dd66955cc4" Feb 9 19:16:37.109461 env[1741]: 2024-02-09 19:16:36.673 [INFO][4561] k8s.go 585: Releasing IP address(es) ContainerID="e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b" Feb 9 19:16:37.109461 env[1741]: 2024-02-09 19:16:36.682 [INFO][4561] utils.go 188: Calico CNI releasing IP address ContainerID="e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b" Feb 9 19:16:37.109461 env[1741]: 2024-02-09 19:16:37.030 [INFO][4644] ipam_plugin.go 415: Releasing address using handleID ContainerID="e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b" HandleID="k8s-pod-network.e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b" Workload="ip--172--31--18--155-k8s-coredns--787d4945fb--qfmtf-eth0" Feb 9 19:16:37.109461 env[1741]: 2024-02-09 19:16:37.030 [INFO][4644] ipam_plugin.go 356: About to acquire host-wide IPAM lock. 
Feb 9 19:16:37.109461 env[1741]: 2024-02-09 19:16:37.030 [INFO][4644] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:16:37.109461 env[1741]: 2024-02-09 19:16:37.080 [WARNING][4644] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b" HandleID="k8s-pod-network.e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b" Workload="ip--172--31--18--155-k8s-coredns--787d4945fb--qfmtf-eth0" Feb 9 19:16:37.109461 env[1741]: 2024-02-09 19:16:37.080 [INFO][4644] ipam_plugin.go 443: Releasing address using workloadID ContainerID="e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b" HandleID="k8s-pod-network.e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b" Workload="ip--172--31--18--155-k8s-coredns--787d4945fb--qfmtf-eth0" Feb 9 19:16:37.109461 env[1741]: 2024-02-09 19:16:37.104 [INFO][4644] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:16:37.109461 env[1741]: 2024-02-09 19:16:37.106 [INFO][4561] k8s.go 591: Teardown processing complete. ContainerID="e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b" Feb 9 19:16:37.112940 env[1741]: time="2024-02-09T19:16:37.112884089Z" level=info msg="TearDown network for sandbox \"e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b\" successfully" Feb 9 19:16:37.113161 env[1741]: time="2024-02-09T19:16:37.113123725Z" level=info msg="StopPodSandbox for \"e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b\" returns successfully" Feb 9 19:16:37.117820 env[1741]: time="2024-02-09T19:16:37.114304436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-qfmtf,Uid:e626e56e-c45b-4aee-b1a6-26ae1953bb79,Namespace:kube-system,Attempt:1,}" Feb 9 19:16:37.152987 systemd-networkd[1533]: cali01faa0bbf5e: Gained IPv6LL Feb 9 19:16:37.284122 (udev-worker)[4615]: Network interface NamePolicy= disabled on kernel command line. 
Feb 9 19:16:37.301839 systemd-networkd[1533]: calif0cdb34df91: Link UP Feb 9 19:16:37.305451 systemd-networkd[1533]: calif0cdb34df91: Gained carrier Feb 9 19:16:37.322016 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calif0cdb34df91: link becomes ready Feb 9 19:16:37.366245 env[1741]: time="2024-02-09T19:16:37.366107765Z" level=info msg="StartContainer for \"fa7c71a3af574969aa07736e89661264cb4a2e3bbaf20ff3919c0d6ed304594f\" returns successfully" Feb 9 19:16:37.393574 env[1741]: 2024-02-09 19:16:36.552 [INFO][4534] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--155-k8s-csi--node--driver--szjdz-eth0 csi-node-driver- calico-system 9d3adc59-2fa5-4081-acf7-fb99c5b37340 696 0 2024-02-09 19:16:05 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7c77f88967 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ip-172-31-18-155 csi-node-driver-szjdz eth0 default [] [] [kns.calico-system ksa.calico-system.default] calif0cdb34df91 [] []}} ContainerID="031adad8ae782ab710f8a5ed498b6d06538a75bffed29846845a760ffed10c12" Namespace="calico-system" Pod="csi-node-driver-szjdz" WorkloadEndpoint="ip--172--31--18--155-k8s-csi--node--driver--szjdz-" Feb 9 19:16:37.393574 env[1741]: 2024-02-09 19:16:36.553 [INFO][4534] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="031adad8ae782ab710f8a5ed498b6d06538a75bffed29846845a760ffed10c12" Namespace="calico-system" Pod="csi-node-driver-szjdz" WorkloadEndpoint="ip--172--31--18--155-k8s-csi--node--driver--szjdz-eth0" Feb 9 19:16:37.393574 env[1741]: 2024-02-09 19:16:37.038 [INFO][4642] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="031adad8ae782ab710f8a5ed498b6d06538a75bffed29846845a760ffed10c12" 
HandleID="k8s-pod-network.031adad8ae782ab710f8a5ed498b6d06538a75bffed29846845a760ffed10c12" Workload="ip--172--31--18--155-k8s-csi--node--driver--szjdz-eth0" Feb 9 19:16:37.393574 env[1741]: 2024-02-09 19:16:37.132 [INFO][4642] ipam_plugin.go 268: Auto assigning IP ContainerID="031adad8ae782ab710f8a5ed498b6d06538a75bffed29846845a760ffed10c12" HandleID="k8s-pod-network.031adad8ae782ab710f8a5ed498b6d06538a75bffed29846845a760ffed10c12" Workload="ip--172--31--18--155-k8s-csi--node--driver--szjdz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d520), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-155", "pod":"csi-node-driver-szjdz", "timestamp":"2024-02-09 19:16:37.038688052 +0000 UTC"}, Hostname:"ip-172-31-18-155", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 19:16:37.393574 env[1741]: 2024-02-09 19:16:37.132 [INFO][4642] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:16:37.393574 env[1741]: 2024-02-09 19:16:37.132 [INFO][4642] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 9 19:16:37.393574 env[1741]: 2024-02-09 19:16:37.132 [INFO][4642] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-155' Feb 9 19:16:37.393574 env[1741]: 2024-02-09 19:16:37.154 [INFO][4642] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.031adad8ae782ab710f8a5ed498b6d06538a75bffed29846845a760ffed10c12" host="ip-172-31-18-155" Feb 9 19:16:37.393574 env[1741]: 2024-02-09 19:16:37.175 [INFO][4642] ipam.go 372: Looking up existing affinities for host host="ip-172-31-18-155" Feb 9 19:16:37.393574 env[1741]: 2024-02-09 19:16:37.198 [INFO][4642] ipam.go 489: Trying affinity for 192.168.95.64/26 host="ip-172-31-18-155" Feb 9 19:16:37.393574 env[1741]: 2024-02-09 19:16:37.207 [INFO][4642] ipam.go 155: Attempting to load block cidr=192.168.95.64/26 host="ip-172-31-18-155" Feb 9 19:16:37.393574 env[1741]: 2024-02-09 19:16:37.219 [INFO][4642] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.95.64/26 host="ip-172-31-18-155" Feb 9 19:16:37.393574 env[1741]: 2024-02-09 19:16:37.222 [INFO][4642] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.95.64/26 handle="k8s-pod-network.031adad8ae782ab710f8a5ed498b6d06538a75bffed29846845a760ffed10c12" host="ip-172-31-18-155" Feb 9 19:16:37.393574 env[1741]: 2024-02-09 19:16:37.228 [INFO][4642] ipam.go 1682: Creating new handle: k8s-pod-network.031adad8ae782ab710f8a5ed498b6d06538a75bffed29846845a760ffed10c12 Feb 9 19:16:37.393574 env[1741]: 2024-02-09 19:16:37.239 [INFO][4642] ipam.go 1203: Writing block in order to claim IPs block=192.168.95.64/26 handle="k8s-pod-network.031adad8ae782ab710f8a5ed498b6d06538a75bffed29846845a760ffed10c12" host="ip-172-31-18-155" Feb 9 19:16:37.393574 env[1741]: 2024-02-09 19:16:37.263 [INFO][4642] ipam.go 1216: Successfully claimed IPs: [192.168.95.67/26] block=192.168.95.64/26 handle="k8s-pod-network.031adad8ae782ab710f8a5ed498b6d06538a75bffed29846845a760ffed10c12" host="ip-172-31-18-155" Feb 9 19:16:37.393574 
env[1741]: 2024-02-09 19:16:37.265 [INFO][4642] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.95.67/26] handle="k8s-pod-network.031adad8ae782ab710f8a5ed498b6d06538a75bffed29846845a760ffed10c12" host="ip-172-31-18-155" Feb 9 19:16:37.393574 env[1741]: 2024-02-09 19:16:37.265 [INFO][4642] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:16:37.393574 env[1741]: 2024-02-09 19:16:37.265 [INFO][4642] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.95.67/26] IPv6=[] ContainerID="031adad8ae782ab710f8a5ed498b6d06538a75bffed29846845a760ffed10c12" HandleID="k8s-pod-network.031adad8ae782ab710f8a5ed498b6d06538a75bffed29846845a760ffed10c12" Workload="ip--172--31--18--155-k8s-csi--node--driver--szjdz-eth0" Feb 9 19:16:37.395124 env[1741]: 2024-02-09 19:16:37.279 [INFO][4534] k8s.go 385: Populated endpoint ContainerID="031adad8ae782ab710f8a5ed498b6d06538a75bffed29846845a760ffed10c12" Namespace="calico-system" Pod="csi-node-driver-szjdz" WorkloadEndpoint="ip--172--31--18--155-k8s-csi--node--driver--szjdz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--155-k8s-csi--node--driver--szjdz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9d3adc59-2fa5-4081-acf7-fb99c5b37340", ResourceVersion:"696", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 16, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-155", ContainerID:"", Pod:"csi-node-driver-szjdz", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.95.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calif0cdb34df91", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:16:37.395124 env[1741]: 2024-02-09 19:16:37.279 [INFO][4534] k8s.go 386: Calico CNI using IPs: [192.168.95.67/32] ContainerID="031adad8ae782ab710f8a5ed498b6d06538a75bffed29846845a760ffed10c12" Namespace="calico-system" Pod="csi-node-driver-szjdz" WorkloadEndpoint="ip--172--31--18--155-k8s-csi--node--driver--szjdz-eth0" Feb 9 19:16:37.395124 env[1741]: 2024-02-09 19:16:37.279 [INFO][4534] dataplane_linux.go 68: Setting the host side veth name to calif0cdb34df91 ContainerID="031adad8ae782ab710f8a5ed498b6d06538a75bffed29846845a760ffed10c12" Namespace="calico-system" Pod="csi-node-driver-szjdz" WorkloadEndpoint="ip--172--31--18--155-k8s-csi--node--driver--szjdz-eth0" Feb 9 19:16:37.395124 env[1741]: 2024-02-09 19:16:37.302 [INFO][4534] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="031adad8ae782ab710f8a5ed498b6d06538a75bffed29846845a760ffed10c12" Namespace="calico-system" Pod="csi-node-driver-szjdz" WorkloadEndpoint="ip--172--31--18--155-k8s-csi--node--driver--szjdz-eth0" Feb 9 19:16:37.395124 env[1741]: 2024-02-09 19:16:37.325 [INFO][4534] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="031adad8ae782ab710f8a5ed498b6d06538a75bffed29846845a760ffed10c12" Namespace="calico-system" Pod="csi-node-driver-szjdz" WorkloadEndpoint="ip--172--31--18--155-k8s-csi--node--driver--szjdz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--155-k8s-csi--node--driver--szjdz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9d3adc59-2fa5-4081-acf7-fb99c5b37340", ResourceVersion:"696", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 16, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-155", ContainerID:"031adad8ae782ab710f8a5ed498b6d06538a75bffed29846845a760ffed10c12", Pod:"csi-node-driver-szjdz", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.95.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calif0cdb34df91", MAC:"ce:e7:fe:3a:85:9f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:16:37.395124 env[1741]: 2024-02-09 19:16:37.354 [INFO][4534] k8s.go 491: Wrote updated endpoint to datastore ContainerID="031adad8ae782ab710f8a5ed498b6d06538a75bffed29846845a760ffed10c12" Namespace="calico-system" Pod="csi-node-driver-szjdz" WorkloadEndpoint="ip--172--31--18--155-k8s-csi--node--driver--szjdz-eth0" Feb 9 19:16:37.428177 kubelet[3035]: I0209 19:16:37.428105 3035 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-r6shr" podStartSLOduration=40.428051428 pod.CreationTimestamp="2024-02-09 19:15:57 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:16:37.427360225 +0000 UTC m=+53.011134423" watchObservedRunningTime="2024-02-09 19:16:37.428051428 +0000 UTC m=+53.011825626" Feb 9 19:16:37.469791 env[1741]: time="2024-02-09T19:16:37.465967470Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:16:37.471325 env[1741]: time="2024-02-09T19:16:37.466031062Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:16:37.471325 env[1741]: time="2024-02-09T19:16:37.470180347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:16:37.471325 env[1741]: time="2024-02-09T19:16:37.470613485Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/031adad8ae782ab710f8a5ed498b6d06538a75bffed29846845a760ffed10c12 pid=4739 runtime=io.containerd.runc.v2 Feb 9 19:16:37.547674 systemd[1]: run-netns-cni\x2d1ed88143\x2df521\x2d4a5a\x2de861\x2db1dd66955cc4.mount: Deactivated successfully. Feb 9 19:16:37.661563 systemd[1]: run-containerd-runc-k8s.io-031adad8ae782ab710f8a5ed498b6d06538a75bffed29846845a760ffed10c12-runc.ztxn9q.mount: Deactivated successfully. 
Feb 9 19:16:37.753000 audit[4780]: NETFILTER_CFG table=filter:115 family=2 entries=44 op=nft_register_chain pid=4780 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:16:37.753000 audit[4780]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=22360 a0=3 a1=ffffc50fedb0 a2=0 a3=ffff9cb20fa8 items=0 ppid=4258 pid=4780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:37.753000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:16:37.810880 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali614cc274273: link becomes ready Feb 9 19:16:37.811190 systemd-networkd[1533]: cali614cc274273: Link UP Feb 9 19:16:37.811620 systemd-networkd[1533]: cali614cc274273: Gained carrier Feb 9 19:16:37.860699 systemd-networkd[1533]: vxlan.calico: Gained IPv6LL Feb 9 19:16:37.874415 env[1741]: 2024-02-09 19:16:37.480 [INFO][4699] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--155-k8s-coredns--787d4945fb--qfmtf-eth0 coredns-787d4945fb- kube-system e626e56e-c45b-4aee-b1a6-26ae1953bb79 707 0 2024-02-09 19:15:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:787d4945fb projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-18-155 coredns-787d4945fb-qfmtf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali614cc274273 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="32d8fb2862328c612648ed3149db9e94870231c0d020c7a45c0c594c7119f835" Namespace="kube-system" Pod="coredns-787d4945fb-qfmtf" WorkloadEndpoint="ip--172--31--18--155-k8s-coredns--787d4945fb--qfmtf-" Feb 9 19:16:37.874415 env[1741]: 2024-02-09 
19:16:37.480 [INFO][4699] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="32d8fb2862328c612648ed3149db9e94870231c0d020c7a45c0c594c7119f835" Namespace="kube-system" Pod="coredns-787d4945fb-qfmtf" WorkloadEndpoint="ip--172--31--18--155-k8s-coredns--787d4945fb--qfmtf-eth0" Feb 9 19:16:37.874415 env[1741]: 2024-02-09 19:16:37.710 [INFO][4759] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="32d8fb2862328c612648ed3149db9e94870231c0d020c7a45c0c594c7119f835" HandleID="k8s-pod-network.32d8fb2862328c612648ed3149db9e94870231c0d020c7a45c0c594c7119f835" Workload="ip--172--31--18--155-k8s-coredns--787d4945fb--qfmtf-eth0" Feb 9 19:16:37.874415 env[1741]: 2024-02-09 19:16:37.740 [INFO][4759] ipam_plugin.go 268: Auto assigning IP ContainerID="32d8fb2862328c612648ed3149db9e94870231c0d020c7a45c0c594c7119f835" HandleID="k8s-pod-network.32d8fb2862328c612648ed3149db9e94870231c0d020c7a45c0c594c7119f835" Workload="ip--172--31--18--155-k8s-coredns--787d4945fb--qfmtf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c050), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-18-155", "pod":"coredns-787d4945fb-qfmtf", "timestamp":"2024-02-09 19:16:37.710226413 +0000 UTC"}, Hostname:"ip-172-31-18-155", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 19:16:37.874415 env[1741]: 2024-02-09 19:16:37.740 [INFO][4759] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:16:37.874415 env[1741]: 2024-02-09 19:16:37.740 [INFO][4759] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 9 19:16:37.874415 env[1741]: 2024-02-09 19:16:37.740 [INFO][4759] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-155' Feb 9 19:16:37.874415 env[1741]: 2024-02-09 19:16:37.744 [INFO][4759] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.32d8fb2862328c612648ed3149db9e94870231c0d020c7a45c0c594c7119f835" host="ip-172-31-18-155" Feb 9 19:16:37.874415 env[1741]: 2024-02-09 19:16:37.755 [INFO][4759] ipam.go 372: Looking up existing affinities for host host="ip-172-31-18-155" Feb 9 19:16:37.874415 env[1741]: 2024-02-09 19:16:37.767 [INFO][4759] ipam.go 489: Trying affinity for 192.168.95.64/26 host="ip-172-31-18-155" Feb 9 19:16:37.874415 env[1741]: 2024-02-09 19:16:37.771 [INFO][4759] ipam.go 155: Attempting to load block cidr=192.168.95.64/26 host="ip-172-31-18-155" Feb 9 19:16:37.874415 env[1741]: 2024-02-09 19:16:37.776 [INFO][4759] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.95.64/26 host="ip-172-31-18-155" Feb 9 19:16:37.874415 env[1741]: 2024-02-09 19:16:37.776 [INFO][4759] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.95.64/26 handle="k8s-pod-network.32d8fb2862328c612648ed3149db9e94870231c0d020c7a45c0c594c7119f835" host="ip-172-31-18-155" Feb 9 19:16:37.874415 env[1741]: 2024-02-09 19:16:37.779 [INFO][4759] ipam.go 1682: Creating new handle: k8s-pod-network.32d8fb2862328c612648ed3149db9e94870231c0d020c7a45c0c594c7119f835 Feb 9 19:16:37.874415 env[1741]: 2024-02-09 19:16:37.786 [INFO][4759] ipam.go 1203: Writing block in order to claim IPs block=192.168.95.64/26 handle="k8s-pod-network.32d8fb2862328c612648ed3149db9e94870231c0d020c7a45c0c594c7119f835" host="ip-172-31-18-155" Feb 9 19:16:37.874415 env[1741]: 2024-02-09 19:16:37.795 [INFO][4759] ipam.go 1216: Successfully claimed IPs: [192.168.95.68/26] block=192.168.95.64/26 handle="k8s-pod-network.32d8fb2862328c612648ed3149db9e94870231c0d020c7a45c0c594c7119f835" host="ip-172-31-18-155" Feb 9 19:16:37.874415 
env[1741]: 2024-02-09 19:16:37.795 [INFO][4759] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.95.68/26] handle="k8s-pod-network.32d8fb2862328c612648ed3149db9e94870231c0d020c7a45c0c594c7119f835" host="ip-172-31-18-155" Feb 9 19:16:37.874415 env[1741]: 2024-02-09 19:16:37.795 [INFO][4759] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:16:37.874415 env[1741]: 2024-02-09 19:16:37.795 [INFO][4759] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.95.68/26] IPv6=[] ContainerID="32d8fb2862328c612648ed3149db9e94870231c0d020c7a45c0c594c7119f835" HandleID="k8s-pod-network.32d8fb2862328c612648ed3149db9e94870231c0d020c7a45c0c594c7119f835" Workload="ip--172--31--18--155-k8s-coredns--787d4945fb--qfmtf-eth0" Feb 9 19:16:37.879937 env[1741]: 2024-02-09 19:16:37.798 [INFO][4699] k8s.go 385: Populated endpoint ContainerID="32d8fb2862328c612648ed3149db9e94870231c0d020c7a45c0c594c7119f835" Namespace="kube-system" Pod="coredns-787d4945fb-qfmtf" WorkloadEndpoint="ip--172--31--18--155-k8s-coredns--787d4945fb--qfmtf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--155-k8s-coredns--787d4945fb--qfmtf-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"e626e56e-c45b-4aee-b1a6-26ae1953bb79", ResourceVersion:"707", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 15, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"ip-172-31-18-155", ContainerID:"", Pod:"coredns-787d4945fb-qfmtf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.95.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali614cc274273", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:16:37.879937 env[1741]: 2024-02-09 19:16:37.799 [INFO][4699] k8s.go 386: Calico CNI using IPs: [192.168.95.68/32] ContainerID="32d8fb2862328c612648ed3149db9e94870231c0d020c7a45c0c594c7119f835" Namespace="kube-system" Pod="coredns-787d4945fb-qfmtf" WorkloadEndpoint="ip--172--31--18--155-k8s-coredns--787d4945fb--qfmtf-eth0" Feb 9 19:16:37.879937 env[1741]: 2024-02-09 19:16:37.799 [INFO][4699] dataplane_linux.go 68: Setting the host side veth name to cali614cc274273 ContainerID="32d8fb2862328c612648ed3149db9e94870231c0d020c7a45c0c594c7119f835" Namespace="kube-system" Pod="coredns-787d4945fb-qfmtf" WorkloadEndpoint="ip--172--31--18--155-k8s-coredns--787d4945fb--qfmtf-eth0" Feb 9 19:16:37.879937 env[1741]: 2024-02-09 19:16:37.807 [INFO][4699] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="32d8fb2862328c612648ed3149db9e94870231c0d020c7a45c0c594c7119f835" Namespace="kube-system" Pod="coredns-787d4945fb-qfmtf" WorkloadEndpoint="ip--172--31--18--155-k8s-coredns--787d4945fb--qfmtf-eth0" Feb 9 19:16:37.879937 env[1741]: 2024-02-09 19:16:37.808 [INFO][4699] k8s.go 413: Added Mac, interface name, and active container ID to endpoint 
ContainerID="32d8fb2862328c612648ed3149db9e94870231c0d020c7a45c0c594c7119f835" Namespace="kube-system" Pod="coredns-787d4945fb-qfmtf" WorkloadEndpoint="ip--172--31--18--155-k8s-coredns--787d4945fb--qfmtf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--155-k8s-coredns--787d4945fb--qfmtf-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"e626e56e-c45b-4aee-b1a6-26ae1953bb79", ResourceVersion:"707", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 15, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-155", ContainerID:"32d8fb2862328c612648ed3149db9e94870231c0d020c7a45c0c594c7119f835", Pod:"coredns-787d4945fb-qfmtf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.95.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali614cc274273", MAC:"f6:80:c9:0d:51:79", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:16:37.879937 env[1741]: 2024-02-09 19:16:37.838 [INFO][4699] k8s.go 491: Wrote updated endpoint to datastore ContainerID="32d8fb2862328c612648ed3149db9e94870231c0d020c7a45c0c594c7119f835" Namespace="kube-system" Pod="coredns-787d4945fb-qfmtf" WorkloadEndpoint="ip--172--31--18--155-k8s-coredns--787d4945fb--qfmtf-eth0" Feb 9 19:16:37.937654 env[1741]: time="2024-02-09T19:16:37.937597587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-szjdz,Uid:9d3adc59-2fa5-4081-acf7-fb99c5b37340,Namespace:calico-system,Attempt:1,} returns sandbox id \"031adad8ae782ab710f8a5ed498b6d06538a75bffed29846845a760ffed10c12\"" Feb 9 19:16:37.964606 env[1741]: time="2024-02-09T19:16:37.961532628Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:16:37.964606 env[1741]: time="2024-02-09T19:16:37.961608640Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:16:37.964606 env[1741]: time="2024-02-09T19:16:37.961641276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:16:37.964606 env[1741]: time="2024-02-09T19:16:37.961922727Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/32d8fb2862328c612648ed3149db9e94870231c0d020c7a45c0c594c7119f835 pid=4829 runtime=io.containerd.runc.v2 Feb 9 19:16:37.982000 audit[4830]: NETFILTER_CFG table=filter:116 family=2 entries=34 op=nft_register_chain pid=4830 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:16:37.982000 audit[4830]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=17884 a0=3 a1=ffffda4c87c0 a2=0 a3=ffffbbf31fa8 items=0 ppid=4258 pid=4830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:37.982000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:16:38.019000 audit[4851]: NETFILTER_CFG table=filter:117 family=2 entries=12 op=nft_register_rule pid=4851 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:16:38.019000 audit[4851]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4028 a0=3 a1=fffff4167e20 a2=0 a3=ffff9ed4b6c0 items=0 ppid=3237 pid=4851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:38.019000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:16:38.023000 audit[4851]: NETFILTER_CFG table=nat:118 family=2 entries=30 op=nft_register_rule pid=4851 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:16:38.023000 audit[4851]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=8836 a0=3 a1=fffff4167e20 a2=0 a3=ffff9ed4b6c0 items=0 ppid=3237 pid=4851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:38.023000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:16:38.131355 env[1741]: time="2024-02-09T19:16:38.131281033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-qfmtf,Uid:e626e56e-c45b-4aee-b1a6-26ae1953bb79,Namespace:kube-system,Attempt:1,} returns sandbox id \"32d8fb2862328c612648ed3149db9e94870231c0d020c7a45c0c594c7119f835\"" Feb 9 19:16:38.139393 env[1741]: time="2024-02-09T19:16:38.139323772Z" level=info msg="CreateContainer within sandbox \"32d8fb2862328c612648ed3149db9e94870231c0d020c7a45c0c594c7119f835\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 19:16:38.167061 env[1741]: time="2024-02-09T19:16:38.166984421Z" level=info msg="CreateContainer within sandbox \"32d8fb2862328c612648ed3149db9e94870231c0d020c7a45c0c594c7119f835\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6c621ccf40145de5e2b2469ccd92f102c09d4750a062f787d32fc48a04321407\"" Feb 9 19:16:38.168580 env[1741]: time="2024-02-09T19:16:38.168514757Z" level=info msg="StartContainer for \"6c621ccf40145de5e2b2469ccd92f102c09d4750a062f787d32fc48a04321407\"" Feb 9 19:16:38.399573 env[1741]: time="2024-02-09T19:16:38.393703470Z" level=info msg="StartContainer for \"6c621ccf40145de5e2b2469ccd92f102c09d4750a062f787d32fc48a04321407\" returns successfully" Feb 9 19:16:38.472716 kubelet[3035]: I0209 19:16:38.472649 3035 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-qfmtf" podStartSLOduration=41.472562961 pod.CreationTimestamp="2024-02-09 19:15:57 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:16:38.470097508 +0000 UTC m=+54.053871694" watchObservedRunningTime="2024-02-09 19:16:38.472562961 +0000 UTC m=+54.056337135" Feb 9 19:16:38.539204 systemd[1]: run-containerd-runc-k8s.io-32d8fb2862328c612648ed3149db9e94870231c0d020c7a45c0c594c7119f835-runc.suTgeI.mount: Deactivated successfully. Feb 9 19:16:38.635000 audit[4930]: NETFILTER_CFG table=filter:119 family=2 entries=12 op=nft_register_rule pid=4930 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:16:38.635000 audit[4930]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4028 a0=3 a1=ffffe079ae80 a2=0 a3=ffff93d9f6c0 items=0 ppid=3237 pid=4930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:38.635000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:16:38.637000 audit[4930]: NETFILTER_CFG table=nat:120 family=2 entries=30 op=nft_register_rule pid=4930 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:16:38.637000 audit[4930]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8836 a0=3 a1=ffffe079ae80 a2=0 a3=ffff93d9f6c0 items=0 ppid=3237 pid=4930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:38.637000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:16:38.856000 audit[4956]: NETFILTER_CFG table=filter:121 family=2 entries=9 op=nft_register_rule pid=4956 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 
19:16:38.856000 audit[4956]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffebc219a0 a2=0 a3=ffff8586a6c0 items=0 ppid=3237 pid=4956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:38.856000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:16:38.873000 audit[4956]: NETFILTER_CFG table=nat:122 family=2 entries=51 op=nft_register_chain pid=4956 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:16:38.873000 audit[4956]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19324 a0=3 a1=ffffebc219a0 a2=0 a3=ffff8586a6c0 items=0 ppid=3237 pid=4956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:38.873000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:16:39.265019 systemd-networkd[1533]: calif0cdb34df91: Gained IPv6LL Feb 9 19:16:39.584961 systemd-networkd[1533]: cali614cc274273: Gained IPv6LL Feb 9 19:16:40.191666 env[1741]: time="2024-02-09T19:16:40.191610461Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:16:40.196800 env[1741]: time="2024-02-09T19:16:40.196725703Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:094645649618376e48b5ec13a94a164d53dbdf819b7ab644f080b751f24560c8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:16:40.201216 env[1741]: time="2024-02-09T19:16:40.201161092Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:16:40.205582 env[1741]: time="2024-02-09T19:16:40.205523972Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:e264ab1fb2f1ae90dd1d84e226d11d2eb4350e74ac27de4c65f29f5aadba5bb1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:16:40.208336 env[1741]: time="2024-02-09T19:16:40.207200987Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.27.0\" returns image reference \"sha256:094645649618376e48b5ec13a94a164d53dbdf819b7ab644f080b751f24560c8\"" Feb 9 19:16:40.212790 env[1741]: time="2024-02-09T19:16:40.210151893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\"" Feb 9 19:16:40.244955 env[1741]: time="2024-02-09T19:16:40.244899392Z" level=info msg="CreateContainer within sandbox \"986d5c1b5388380fb30c75c948b6fdc148ddcaa0cb685eef909395232c3888c1\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 9 19:16:40.282855 env[1741]: time="2024-02-09T19:16:40.282789129Z" level=info msg="CreateContainer within sandbox \"986d5c1b5388380fb30c75c948b6fdc148ddcaa0cb685eef909395232c3888c1\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"c6f4c3569b2a573f9be026a6ae8f86427ea67b4b025efac18e6a297cbc941376\"" Feb 9 19:16:40.284076 env[1741]: time="2024-02-09T19:16:40.284017161Z" level=info msg="StartContainer for \"c6f4c3569b2a573f9be026a6ae8f86427ea67b4b025efac18e6a297cbc941376\"" Feb 9 19:16:40.425183 kernel: kauditd_printk_skb: 110 callbacks suppressed Feb 9 19:16:40.425385 kernel: audit: type=1325 audit(1707506200.415:312): table=filter:123 family=2 entries=6 op=nft_register_rule pid=5008 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:16:40.415000 audit[5008]: NETFILTER_CFG table=filter:123 family=2 entries=6 
op=nft_register_rule pid=5008 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:16:40.415000 audit[5008]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffd3a07820 a2=0 a3=ffff835c36c0 items=0 ppid=3237 pid=5008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:40.437470 kernel: audit: type=1300 audit(1707506200.415:312): arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffd3a07820 a2=0 a3=ffff835c36c0 items=0 ppid=3237 pid=5008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:40.415000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:16:40.443808 kernel: audit: type=1327 audit(1707506200.415:312): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:16:40.485000 audit[5008]: NETFILTER_CFG table=nat:124 family=2 entries=72 op=nft_register_chain pid=5008 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:16:40.485000 audit[5008]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=ffffd3a07820 a2=0 a3=ffff835c36c0 items=0 ppid=3237 pid=5008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:40.508103 kernel: audit: type=1325 audit(1707506200.485:313): table=nat:124 family=2 entries=72 op=nft_register_chain pid=5008 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:16:40.508286 kernel: audit: type=1300 audit(1707506200.485:313): 
arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=ffffd3a07820 a2=0 a3=ffff835c36c0 items=0 ppid=3237 pid=5008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:40.485000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:16:40.516177 kernel: audit: type=1327 audit(1707506200.485:313): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:16:40.607071 env[1741]: time="2024-02-09T19:16:40.607007585Z" level=info msg="StartContainer for \"c6f4c3569b2a573f9be026a6ae8f86427ea67b4b025efac18e6a297cbc941376\" returns successfully" Feb 9 19:16:41.532286 systemd[1]: run-containerd-runc-k8s.io-c6f4c3569b2a573f9be026a6ae8f86427ea67b4b025efac18e6a297cbc941376-runc.nALc2V.mount: Deactivated successfully. 
Feb 9 19:16:41.585422 kubelet[3035]: I0209 19:16:41.585326 3035 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-59dd7c78-j47sd" podStartSLOduration=-9.223372000269516e+09 pod.CreationTimestamp="2024-02-09 19:16:05 +0000 UTC" firstStartedPulling="2024-02-09 19:16:36.929130678 +0000 UTC m=+52.512904852" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:16:41.561323724 +0000 UTC m=+57.145097910" watchObservedRunningTime="2024-02-09 19:16:41.585260492 +0000 UTC m=+57.169034690" Feb 9 19:16:42.031914 env[1741]: time="2024-02-09T19:16:42.031840095Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:16:42.037684 env[1741]: time="2024-02-09T19:16:42.037615337Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4b71e7439e0eba34a97844591560a009f37e8e6c17a386a34d416c1cc872dee8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:16:42.042506 env[1741]: time="2024-02-09T19:16:42.042447078Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:16:42.046994 env[1741]: time="2024-02-09T19:16:42.046940387Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:2b9021393c17e87ba8a3c89f5b3719941812f4e4751caa0b71eb2233bff48738,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:16:42.049718 env[1741]: time="2024-02-09T19:16:42.048508255Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\" returns image reference \"sha256:4b71e7439e0eba34a97844591560a009f37e8e6c17a386a34d416c1cc872dee8\"" Feb 9 19:16:42.053662 env[1741]: time="2024-02-09T19:16:42.053598945Z" level=info msg="CreateContainer within sandbox 
\"031adad8ae782ab710f8a5ed498b6d06538a75bffed29846845a760ffed10c12\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 9 19:16:42.087817 env[1741]: time="2024-02-09T19:16:42.087726117Z" level=info msg="CreateContainer within sandbox \"031adad8ae782ab710f8a5ed498b6d06538a75bffed29846845a760ffed10c12\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"b98bcee72c22eb9d54c131dbc526ac7104b5406973bc0ff99817dd9a4c723a89\"" Feb 9 19:16:42.089297 env[1741]: time="2024-02-09T19:16:42.089173503Z" level=info msg="StartContainer for \"b98bcee72c22eb9d54c131dbc526ac7104b5406973bc0ff99817dd9a4c723a89\"" Feb 9 19:16:42.340548 env[1741]: time="2024-02-09T19:16:42.340403252Z" level=info msg="StartContainer for \"b98bcee72c22eb9d54c131dbc526ac7104b5406973bc0ff99817dd9a4c723a89\" returns successfully" Feb 9 19:16:42.343431 env[1741]: time="2024-02-09T19:16:42.343373114Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0\"" Feb 9 19:16:44.036734 env[1741]: time="2024-02-09T19:16:44.036659482Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:16:44.041068 env[1741]: time="2024-02-09T19:16:44.040994343Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9dbda087e98c46610fb8629cf530f1fe49eee4b17d2afe455664ca446ec39d43,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:16:44.044038 env[1741]: time="2024-02-09T19:16:44.043981047Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:16:44.046717 env[1741]: time="2024-02-09T19:16:44.046650471Z" level=info msg="ImageCreate event 
&ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:45a7aba6020a7cf7b866cb8a8d481b30c97e9b3407e1459aaa65a5b4cc06633a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:16:44.047870 env[1741]: time="2024-02-09T19:16:44.047724834Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0\" returns image reference \"sha256:9dbda087e98c46610fb8629cf530f1fe49eee4b17d2afe455664ca446ec39d43\"" Feb 9 19:16:44.055185 env[1741]: time="2024-02-09T19:16:44.055119437Z" level=info msg="CreateContainer within sandbox \"031adad8ae782ab710f8a5ed498b6d06538a75bffed29846845a760ffed10c12\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 9 19:16:44.082922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount696438806.mount: Deactivated successfully. Feb 9 19:16:44.089214 env[1741]: time="2024-02-09T19:16:44.089149033Z" level=info msg="CreateContainer within sandbox \"031adad8ae782ab710f8a5ed498b6d06538a75bffed29846845a760ffed10c12\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b0c9e15a403a9ae9d96aaa41f4997426246083db32c92929b88869fd6c11cf2b\"" Feb 9 19:16:44.093205 env[1741]: time="2024-02-09T19:16:44.091092165Z" level=info msg="StartContainer for \"b0c9e15a403a9ae9d96aaa41f4997426246083db32c92929b88869fd6c11cf2b\"" Feb 9 19:16:44.242217 env[1741]: time="2024-02-09T19:16:44.242104509Z" level=info msg="StartContainer for \"b0c9e15a403a9ae9d96aaa41f4997426246083db32c92929b88869fd6c11cf2b\" returns successfully" Feb 9 19:16:44.500784 kubelet[3035]: I0209 19:16:44.500706 3035 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-szjdz" podStartSLOduration=-9.223371997354153e+09 pod.CreationTimestamp="2024-02-09 19:16:05 +0000 UTC" firstStartedPulling="2024-02-09 19:16:37.9553865 +0000 UTC m=+53.539160662" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2024-02-09 19:16:44.500361035 +0000 UTC m=+60.084135221" watchObservedRunningTime="2024-02-09 19:16:44.500623503 +0000 UTC m=+60.084397701" Feb 9 19:16:44.691501 env[1741]: time="2024-02-09T19:16:44.691429407Z" level=info msg="StopPodSandbox for \"ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23\"" Feb 9 19:16:44.824630 env[1741]: 2024-02-09 19:16:44.758 [WARNING][5131] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--155-k8s-csi--node--driver--szjdz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9d3adc59-2fa5-4081-acf7-fb99c5b37340", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 16, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-155", ContainerID:"031adad8ae782ab710f8a5ed498b6d06538a75bffed29846845a760ffed10c12", Pod:"csi-node-driver-szjdz", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.95.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, 
InterfaceName:"calif0cdb34df91", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:16:44.824630 env[1741]: 2024-02-09 19:16:44.758 [INFO][5131] k8s.go 578: Cleaning up netns ContainerID="ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23" Feb 9 19:16:44.824630 env[1741]: 2024-02-09 19:16:44.758 [INFO][5131] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23" iface="eth0" netns="" Feb 9 19:16:44.824630 env[1741]: 2024-02-09 19:16:44.758 [INFO][5131] k8s.go 585: Releasing IP address(es) ContainerID="ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23" Feb 9 19:16:44.824630 env[1741]: 2024-02-09 19:16:44.758 [INFO][5131] utils.go 188: Calico CNI releasing IP address ContainerID="ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23" Feb 9 19:16:44.824630 env[1741]: 2024-02-09 19:16:44.801 [INFO][5138] ipam_plugin.go 415: Releasing address using handleID ContainerID="ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23" HandleID="k8s-pod-network.ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23" Workload="ip--172--31--18--155-k8s-csi--node--driver--szjdz-eth0" Feb 9 19:16:44.824630 env[1741]: 2024-02-09 19:16:44.801 [INFO][5138] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:16:44.824630 env[1741]: 2024-02-09 19:16:44.802 [INFO][5138] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:16:44.824630 env[1741]: 2024-02-09 19:16:44.816 [WARNING][5138] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23" HandleID="k8s-pod-network.ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23" Workload="ip--172--31--18--155-k8s-csi--node--driver--szjdz-eth0" Feb 9 19:16:44.824630 env[1741]: 2024-02-09 19:16:44.817 [INFO][5138] ipam_plugin.go 443: Releasing address using workloadID ContainerID="ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23" HandleID="k8s-pod-network.ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23" Workload="ip--172--31--18--155-k8s-csi--node--driver--szjdz-eth0" Feb 9 19:16:44.824630 env[1741]: 2024-02-09 19:16:44.819 [INFO][5138] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:16:44.824630 env[1741]: 2024-02-09 19:16:44.821 [INFO][5131] k8s.go 591: Teardown processing complete. ContainerID="ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23" Feb 9 19:16:44.826058 env[1741]: time="2024-02-09T19:16:44.826004468Z" level=info msg="TearDown network for sandbox \"ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23\" successfully" Feb 9 19:16:44.826205 env[1741]: time="2024-02-09T19:16:44.826171568Z" level=info msg="StopPodSandbox for \"ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23\" returns successfully" Feb 9 19:16:44.827403 env[1741]: time="2024-02-09T19:16:44.827348858Z" level=info msg="RemovePodSandbox for \"ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23\"" Feb 9 19:16:44.827976 env[1741]: time="2024-02-09T19:16:44.827902304Z" level=info msg="Forcibly stopping sandbox \"ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23\"" Feb 9 19:16:44.970110 env[1741]: 2024-02-09 19:16:44.895 [WARNING][5157] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--155-k8s-csi--node--driver--szjdz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9d3adc59-2fa5-4081-acf7-fb99c5b37340", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 16, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-155", ContainerID:"031adad8ae782ab710f8a5ed498b6d06538a75bffed29846845a760ffed10c12", Pod:"csi-node-driver-szjdz", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.95.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calif0cdb34df91", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:16:44.970110 env[1741]: 2024-02-09 19:16:44.896 [INFO][5157] k8s.go 578: Cleaning up netns ContainerID="ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23" Feb 9 19:16:44.970110 env[1741]: 2024-02-09 19:16:44.896 [INFO][5157] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23" iface="eth0" netns="" Feb 9 19:16:44.970110 env[1741]: 2024-02-09 19:16:44.899 [INFO][5157] k8s.go 585: Releasing IP address(es) ContainerID="ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23" Feb 9 19:16:44.970110 env[1741]: 2024-02-09 19:16:44.899 [INFO][5157] utils.go 188: Calico CNI releasing IP address ContainerID="ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23" Feb 9 19:16:44.970110 env[1741]: 2024-02-09 19:16:44.938 [INFO][5164] ipam_plugin.go 415: Releasing address using handleID ContainerID="ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23" HandleID="k8s-pod-network.ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23" Workload="ip--172--31--18--155-k8s-csi--node--driver--szjdz-eth0" Feb 9 19:16:44.970110 env[1741]: 2024-02-09 19:16:44.938 [INFO][5164] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:16:44.970110 env[1741]: 2024-02-09 19:16:44.938 [INFO][5164] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:16:44.970110 env[1741]: 2024-02-09 19:16:44.962 [WARNING][5164] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23" HandleID="k8s-pod-network.ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23" Workload="ip--172--31--18--155-k8s-csi--node--driver--szjdz-eth0" Feb 9 19:16:44.970110 env[1741]: 2024-02-09 19:16:44.962 [INFO][5164] ipam_plugin.go 443: Releasing address using workloadID ContainerID="ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23" HandleID="k8s-pod-network.ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23" Workload="ip--172--31--18--155-k8s-csi--node--driver--szjdz-eth0" Feb 9 19:16:44.970110 env[1741]: 2024-02-09 19:16:44.964 [INFO][5164] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 19:16:44.970110 env[1741]: 2024-02-09 19:16:44.967 [INFO][5157] k8s.go 591: Teardown processing complete. ContainerID="ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23" Feb 9 19:16:44.977577 env[1741]: time="2024-02-09T19:16:44.973901202Z" level=info msg="TearDown network for sandbox \"ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23\" successfully" Feb 9 19:16:44.979919 env[1741]: time="2024-02-09T19:16:44.979834173Z" level=info msg="RemovePodSandbox \"ca08324b072997de8702913871598afb395d400be426e4343aab8bfdb3a4ea23\" returns successfully" Feb 9 19:16:44.981234 env[1741]: time="2024-02-09T19:16:44.981157012Z" level=info msg="StopPodSandbox for \"7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b\"" Feb 9 19:16:45.072112 systemd[1]: run-containerd-runc-k8s.io-b0c9e15a403a9ae9d96aaa41f4997426246083db32c92929b88869fd6c11cf2b-runc.Bblmpi.mount: Deactivated successfully. Feb 9 19:16:45.156776 env[1741]: 2024-02-09 19:16:45.056 [WARNING][5184] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--155-k8s-coredns--787d4945fb--r6shr-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"279e36fd-cfd0-4b8d-8437-2765b9919f84", ResourceVersion:"732", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 15, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-155", ContainerID:"027fbb37873650f2213da3c7b5ee1705470dfa7c9007f8bbfbe13882ca72093d", Pod:"coredns-787d4945fb-r6shr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.95.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali01faa0bbf5e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:16:45.156776 env[1741]: 2024-02-09 19:16:45.056 [INFO][5184] k8s.go 578: Cleaning up netns 
ContainerID="7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b" Feb 9 19:16:45.156776 env[1741]: 2024-02-09 19:16:45.056 [INFO][5184] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b" iface="eth0" netns="" Feb 9 19:16:45.156776 env[1741]: 2024-02-09 19:16:45.056 [INFO][5184] k8s.go 585: Releasing IP address(es) ContainerID="7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b" Feb 9 19:16:45.156776 env[1741]: 2024-02-09 19:16:45.056 [INFO][5184] utils.go 188: Calico CNI releasing IP address ContainerID="7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b" Feb 9 19:16:45.156776 env[1741]: 2024-02-09 19:16:45.133 [INFO][5190] ipam_plugin.go 415: Releasing address using handleID ContainerID="7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b" HandleID="k8s-pod-network.7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b" Workload="ip--172--31--18--155-k8s-coredns--787d4945fb--r6shr-eth0" Feb 9 19:16:45.156776 env[1741]: 2024-02-09 19:16:45.134 [INFO][5190] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:16:45.156776 env[1741]: 2024-02-09 19:16:45.134 [INFO][5190] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:16:45.156776 env[1741]: 2024-02-09 19:16:45.148 [WARNING][5190] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b" HandleID="k8s-pod-network.7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b" Workload="ip--172--31--18--155-k8s-coredns--787d4945fb--r6shr-eth0" Feb 9 19:16:45.156776 env[1741]: 2024-02-09 19:16:45.148 [INFO][5190] ipam_plugin.go 443: Releasing address using workloadID ContainerID="7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b" HandleID="k8s-pod-network.7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b" Workload="ip--172--31--18--155-k8s-coredns--787d4945fb--r6shr-eth0" Feb 9 19:16:45.156776 env[1741]: 2024-02-09 19:16:45.150 [INFO][5190] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:16:45.156776 env[1741]: 2024-02-09 19:16:45.152 [INFO][5184] k8s.go 591: Teardown processing complete. ContainerID="7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b" Feb 9 19:16:45.156776 env[1741]: time="2024-02-09T19:16:45.156121020Z" level=info msg="TearDown network for sandbox \"7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b\" successfully" Feb 9 19:16:45.156776 env[1741]: time="2024-02-09T19:16:45.156176588Z" level=info msg="StopPodSandbox for \"7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b\" returns successfully" Feb 9 19:16:45.158322 env[1741]: time="2024-02-09T19:16:45.157707559Z" level=info msg="RemovePodSandbox for \"7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b\"" Feb 9 19:16:45.158322 env[1741]: time="2024-02-09T19:16:45.157857032Z" level=info msg="Forcibly stopping sandbox \"7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b\"" Feb 9 19:16:45.170859 kubelet[3035]: I0209 19:16:45.170217 3035 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 9 19:16:45.170859 kubelet[3035]: I0209 19:16:45.170261 3035 csi_plugin.go:112] 
kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 9 19:16:45.342200 env[1741]: 2024-02-09 19:16:45.264 [WARNING][5209] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--155-k8s-coredns--787d4945fb--r6shr-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"279e36fd-cfd0-4b8d-8437-2765b9919f84", ResourceVersion:"732", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 15, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-155", ContainerID:"027fbb37873650f2213da3c7b5ee1705470dfa7c9007f8bbfbe13882ca72093d", Pod:"coredns-787d4945fb-r6shr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.95.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali01faa0bbf5e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:16:45.342200 env[1741]: 2024-02-09 19:16:45.268 [INFO][5209] k8s.go 578: Cleaning up netns ContainerID="7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b" Feb 9 19:16:45.342200 env[1741]: 2024-02-09 19:16:45.268 [INFO][5209] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b" iface="eth0" netns="" Feb 9 19:16:45.342200 env[1741]: 2024-02-09 19:16:45.268 [INFO][5209] k8s.go 585: Releasing IP address(es) ContainerID="7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b" Feb 9 19:16:45.342200 env[1741]: 2024-02-09 19:16:45.268 [INFO][5209] utils.go 188: Calico CNI releasing IP address ContainerID="7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b" Feb 9 19:16:45.342200 env[1741]: 2024-02-09 19:16:45.318 [INFO][5216] ipam_plugin.go 415: Releasing address using handleID ContainerID="7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b" HandleID="k8s-pod-network.7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b" Workload="ip--172--31--18--155-k8s-coredns--787d4945fb--r6shr-eth0" Feb 9 19:16:45.342200 env[1741]: 2024-02-09 19:16:45.318 [INFO][5216] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:16:45.342200 env[1741]: 2024-02-09 19:16:45.318 [INFO][5216] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:16:45.342200 env[1741]: 2024-02-09 19:16:45.334 [WARNING][5216] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b" HandleID="k8s-pod-network.7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b" Workload="ip--172--31--18--155-k8s-coredns--787d4945fb--r6shr-eth0" Feb 9 19:16:45.342200 env[1741]: 2024-02-09 19:16:45.335 [INFO][5216] ipam_plugin.go 443: Releasing address using workloadID ContainerID="7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b" HandleID="k8s-pod-network.7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b" Workload="ip--172--31--18--155-k8s-coredns--787d4945fb--r6shr-eth0" Feb 9 19:16:45.342200 env[1741]: 2024-02-09 19:16:45.337 [INFO][5216] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:16:45.342200 env[1741]: 2024-02-09 19:16:45.339 [INFO][5209] k8s.go 591: Teardown processing complete. ContainerID="7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b" Feb 9 19:16:45.343333 env[1741]: time="2024-02-09T19:16:45.343279737Z" level=info msg="TearDown network for sandbox \"7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b\" successfully" Feb 9 19:16:45.348629 env[1741]: time="2024-02-09T19:16:45.348568552Z" level=info msg="RemovePodSandbox \"7edc08038a90f8869904f44d91ee060726836a850e723329ceff09e34f42f91b\" returns successfully" Feb 9 19:16:45.349636 env[1741]: time="2024-02-09T19:16:45.349590256Z" level=info msg="StopPodSandbox for \"e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b\"" Feb 9 19:16:45.526687 env[1741]: 2024-02-09 19:16:45.423 [WARNING][5235] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--155-k8s-coredns--787d4945fb--qfmtf-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"e626e56e-c45b-4aee-b1a6-26ae1953bb79", ResourceVersion:"739", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 15, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-155", ContainerID:"32d8fb2862328c612648ed3149db9e94870231c0d020c7a45c0c594c7119f835", Pod:"coredns-787d4945fb-qfmtf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.95.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali614cc274273", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:16:45.526687 env[1741]: 2024-02-09 19:16:45.424 [INFO][5235] k8s.go 578: Cleaning up netns 
ContainerID="e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b" Feb 9 19:16:45.526687 env[1741]: 2024-02-09 19:16:45.424 [INFO][5235] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b" iface="eth0" netns="" Feb 9 19:16:45.526687 env[1741]: 2024-02-09 19:16:45.424 [INFO][5235] k8s.go 585: Releasing IP address(es) ContainerID="e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b" Feb 9 19:16:45.526687 env[1741]: 2024-02-09 19:16:45.424 [INFO][5235] utils.go 188: Calico CNI releasing IP address ContainerID="e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b" Feb 9 19:16:45.526687 env[1741]: 2024-02-09 19:16:45.465 [INFO][5241] ipam_plugin.go 415: Releasing address using handleID ContainerID="e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b" HandleID="k8s-pod-network.e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b" Workload="ip--172--31--18--155-k8s-coredns--787d4945fb--qfmtf-eth0" Feb 9 19:16:45.526687 env[1741]: 2024-02-09 19:16:45.466 [INFO][5241] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:16:45.526687 env[1741]: 2024-02-09 19:16:45.466 [INFO][5241] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:16:45.526687 env[1741]: 2024-02-09 19:16:45.500 [WARNING][5241] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b" HandleID="k8s-pod-network.e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b" Workload="ip--172--31--18--155-k8s-coredns--787d4945fb--qfmtf-eth0" Feb 9 19:16:45.526687 env[1741]: 2024-02-09 19:16:45.500 [INFO][5241] ipam_plugin.go 443: Releasing address using workloadID ContainerID="e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b" HandleID="k8s-pod-network.e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b" Workload="ip--172--31--18--155-k8s-coredns--787d4945fb--qfmtf-eth0" Feb 9 19:16:45.526687 env[1741]: 2024-02-09 19:16:45.503 [INFO][5241] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:16:45.526687 env[1741]: 2024-02-09 19:16:45.515 [INFO][5235] k8s.go 591: Teardown processing complete. ContainerID="e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b" Feb 9 19:16:45.528630 env[1741]: time="2024-02-09T19:16:45.528100361Z" level=info msg="TearDown network for sandbox \"e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b\" successfully" Feb 9 19:16:45.528870 env[1741]: time="2024-02-09T19:16:45.528619984Z" level=info msg="StopPodSandbox for \"e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b\" returns successfully" Feb 9 19:16:45.529543 env[1741]: time="2024-02-09T19:16:45.529473291Z" level=info msg="RemovePodSandbox for \"e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b\"" Feb 9 19:16:45.529910 env[1741]: time="2024-02-09T19:16:45.529710359Z" level=info msg="Forcibly stopping sandbox \"e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b\"" Feb 9 19:16:45.688335 env[1741]: 2024-02-09 19:16:45.612 [WARNING][5262] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--155-k8s-coredns--787d4945fb--qfmtf-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"e626e56e-c45b-4aee-b1a6-26ae1953bb79", ResourceVersion:"739", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 15, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-155", ContainerID:"32d8fb2862328c612648ed3149db9e94870231c0d020c7a45c0c594c7119f835", Pod:"coredns-787d4945fb-qfmtf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.95.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali614cc274273", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:16:45.688335 env[1741]: 2024-02-09 19:16:45.612 [INFO][5262] k8s.go 578: Cleaning up netns 
ContainerID="e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b" Feb 9 19:16:45.688335 env[1741]: 2024-02-09 19:16:45.612 [INFO][5262] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b" iface="eth0" netns="" Feb 9 19:16:45.688335 env[1741]: 2024-02-09 19:16:45.613 [INFO][5262] k8s.go 585: Releasing IP address(es) ContainerID="e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b" Feb 9 19:16:45.688335 env[1741]: 2024-02-09 19:16:45.613 [INFO][5262] utils.go 188: Calico CNI releasing IP address ContainerID="e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b" Feb 9 19:16:45.688335 env[1741]: 2024-02-09 19:16:45.665 [INFO][5268] ipam_plugin.go 415: Releasing address using handleID ContainerID="e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b" HandleID="k8s-pod-network.e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b" Workload="ip--172--31--18--155-k8s-coredns--787d4945fb--qfmtf-eth0" Feb 9 19:16:45.688335 env[1741]: 2024-02-09 19:16:45.665 [INFO][5268] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:16:45.688335 env[1741]: 2024-02-09 19:16:45.665 [INFO][5268] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:16:45.688335 env[1741]: 2024-02-09 19:16:45.681 [WARNING][5268] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b" HandleID="k8s-pod-network.e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b" Workload="ip--172--31--18--155-k8s-coredns--787d4945fb--qfmtf-eth0" Feb 9 19:16:45.688335 env[1741]: 2024-02-09 19:16:45.681 [INFO][5268] ipam_plugin.go 443: Releasing address using workloadID ContainerID="e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b" HandleID="k8s-pod-network.e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b" Workload="ip--172--31--18--155-k8s-coredns--787d4945fb--qfmtf-eth0" Feb 9 19:16:45.688335 env[1741]: 2024-02-09 19:16:45.683 [INFO][5268] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:16:45.688335 env[1741]: 2024-02-09 19:16:45.685 [INFO][5262] k8s.go 591: Teardown processing complete. ContainerID="e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b" Feb 9 19:16:45.689517 env[1741]: time="2024-02-09T19:16:45.689461565Z" level=info msg="TearDown network for sandbox \"e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b\" successfully" Feb 9 19:16:45.695354 env[1741]: time="2024-02-09T19:16:45.695293065Z" level=info msg="RemovePodSandbox \"e639c499ac0e5ab2bf73e981003aa6c5ca92f6b69870d34ff8ce642c97e5d81b\" returns successfully" Feb 9 19:16:45.696456 env[1741]: time="2024-02-09T19:16:45.696369821Z" level=info msg="StopPodSandbox for \"4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7\"" Feb 9 19:16:45.832377 env[1741]: 2024-02-09 19:16:45.769 [WARNING][5286] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--155-k8s-calico--kube--controllers--59dd7c78--j47sd-eth0", GenerateName:"calico-kube-controllers-59dd7c78-", Namespace:"calico-system", SelfLink:"", UID:"97d159a3-eb2b-41cd-8d25-e9b1cc42f426", ResourceVersion:"758", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 16, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59dd7c78", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-155", ContainerID:"986d5c1b5388380fb30c75c948b6fdc148ddcaa0cb685eef909395232c3888c1", Pod:"calico-kube-controllers-59dd7c78-j47sd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.95.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0fd0eacd73f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:16:45.832377 env[1741]: 2024-02-09 19:16:45.772 [INFO][5286] k8s.go 578: Cleaning up netns ContainerID="4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7" Feb 9 19:16:45.832377 env[1741]: 2024-02-09 19:16:45.773 [INFO][5286] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7" iface="eth0" netns="" Feb 9 19:16:45.832377 env[1741]: 2024-02-09 19:16:45.773 [INFO][5286] k8s.go 585: Releasing IP address(es) ContainerID="4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7" Feb 9 19:16:45.832377 env[1741]: 2024-02-09 19:16:45.773 [INFO][5286] utils.go 188: Calico CNI releasing IP address ContainerID="4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7" Feb 9 19:16:45.832377 env[1741]: 2024-02-09 19:16:45.809 [INFO][5292] ipam_plugin.go 415: Releasing address using handleID ContainerID="4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7" HandleID="k8s-pod-network.4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7" Workload="ip--172--31--18--155-k8s-calico--kube--controllers--59dd7c78--j47sd-eth0" Feb 9 19:16:45.832377 env[1741]: 2024-02-09 19:16:45.809 [INFO][5292] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:16:45.832377 env[1741]: 2024-02-09 19:16:45.810 [INFO][5292] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:16:45.832377 env[1741]: 2024-02-09 19:16:45.824 [WARNING][5292] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7" HandleID="k8s-pod-network.4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7" Workload="ip--172--31--18--155-k8s-calico--kube--controllers--59dd7c78--j47sd-eth0" Feb 9 19:16:45.832377 env[1741]: 2024-02-09 19:16:45.824 [INFO][5292] ipam_plugin.go 443: Releasing address using workloadID ContainerID="4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7" HandleID="k8s-pod-network.4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7" Workload="ip--172--31--18--155-k8s-calico--kube--controllers--59dd7c78--j47sd-eth0" Feb 9 19:16:45.832377 env[1741]: 2024-02-09 19:16:45.827 [INFO][5292] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:16:45.832377 env[1741]: 2024-02-09 19:16:45.829 [INFO][5286] k8s.go 591: Teardown processing complete. ContainerID="4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7" Feb 9 19:16:45.834602 env[1741]: time="2024-02-09T19:16:45.834547008Z" level=info msg="TearDown network for sandbox \"4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7\" successfully" Feb 9 19:16:45.834792 env[1741]: time="2024-02-09T19:16:45.834737290Z" level=info msg="StopPodSandbox for \"4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7\" returns successfully" Feb 9 19:16:45.835854 env[1741]: time="2024-02-09T19:16:45.835671412Z" level=info msg="RemovePodSandbox for \"4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7\"" Feb 9 19:16:45.835854 env[1741]: time="2024-02-09T19:16:45.835794235Z" level=info msg="Forcibly stopping sandbox \"4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7\"" Feb 9 19:16:45.988018 env[1741]: 2024-02-09 19:16:45.912 [WARNING][5310] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--155-k8s-calico--kube--controllers--59dd7c78--j47sd-eth0", GenerateName:"calico-kube-controllers-59dd7c78-", Namespace:"calico-system", SelfLink:"", UID:"97d159a3-eb2b-41cd-8d25-e9b1cc42f426", ResourceVersion:"758", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 16, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59dd7c78", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-155", ContainerID:"986d5c1b5388380fb30c75c948b6fdc148ddcaa0cb685eef909395232c3888c1", Pod:"calico-kube-controllers-59dd7c78-j47sd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.95.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0fd0eacd73f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:16:45.988018 env[1741]: 2024-02-09 19:16:45.912 [INFO][5310] k8s.go 578: Cleaning up netns ContainerID="4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7" Feb 9 19:16:45.988018 env[1741]: 2024-02-09 19:16:45.912 [INFO][5310] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7" iface="eth0" netns="" Feb 9 19:16:45.988018 env[1741]: 2024-02-09 19:16:45.912 [INFO][5310] k8s.go 585: Releasing IP address(es) ContainerID="4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7" Feb 9 19:16:45.988018 env[1741]: 2024-02-09 19:16:45.912 [INFO][5310] utils.go 188: Calico CNI releasing IP address ContainerID="4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7" Feb 9 19:16:45.988018 env[1741]: 2024-02-09 19:16:45.966 [INFO][5316] ipam_plugin.go 415: Releasing address using handleID ContainerID="4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7" HandleID="k8s-pod-network.4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7" Workload="ip--172--31--18--155-k8s-calico--kube--controllers--59dd7c78--j47sd-eth0" Feb 9 19:16:45.988018 env[1741]: 2024-02-09 19:16:45.966 [INFO][5316] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:16:45.988018 env[1741]: 2024-02-09 19:16:45.966 [INFO][5316] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:16:45.988018 env[1741]: 2024-02-09 19:16:45.980 [WARNING][5316] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7" HandleID="k8s-pod-network.4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7" Workload="ip--172--31--18--155-k8s-calico--kube--controllers--59dd7c78--j47sd-eth0" Feb 9 19:16:45.988018 env[1741]: 2024-02-09 19:16:45.980 [INFO][5316] ipam_plugin.go 443: Releasing address using workloadID ContainerID="4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7" HandleID="k8s-pod-network.4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7" Workload="ip--172--31--18--155-k8s-calico--kube--controllers--59dd7c78--j47sd-eth0" Feb 9 19:16:45.988018 env[1741]: 2024-02-09 19:16:45.982 [INFO][5316] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:16:45.988018 env[1741]: 2024-02-09 19:16:45.985 [INFO][5310] k8s.go 591: Teardown processing complete. ContainerID="4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7" Feb 9 19:16:45.989362 env[1741]: time="2024-02-09T19:16:45.989308874Z" level=info msg="TearDown network for sandbox \"4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7\" successfully" Feb 9 19:16:45.995177 env[1741]: time="2024-02-09T19:16:45.995070479Z" level=info msg="RemovePodSandbox \"4412a918ab746d3e2fe21c48fc996ac0e7d33199ac73b0a0cb192af447afe2f7\" returns successfully" Feb 9 19:16:48.108118 systemd[1]: Started sshd@7-172.31.18.155:22-147.75.109.163:50620.service. Feb 9 19:16:48.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.18.155:22-147.75.109.163:50620 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:48.118875 kernel: audit: type=1130 audit(1707506208.108:314): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.18.155:22-147.75.109.163:50620 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:16:48.295000 audit[5334]: USER_ACCT pid=5334 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:16:48.296288 sshd[5334]: Accepted publickey for core from 147.75.109.163 port 50620 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:16:48.307831 kernel: audit: type=1101 audit(1707506208.295:315): pid=5334 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:16:48.307000 audit[5334]: CRED_ACQ pid=5334 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:16:48.309880 sshd[5334]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:16:48.325971 kernel: audit: type=1103 audit(1707506208.307:316): pid=5334 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:16:48.326072 kernel: audit: type=1006 audit(1707506208.307:317): pid=5334 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Feb 9 19:16:48.326116 kernel: audit: type=1300 audit(1707506208.307:317): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcd5123f0 a2=3 a3=1 items=0 ppid=1 pid=5334 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" 
subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:48.307000 audit[5334]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcd5123f0 a2=3 a3=1 items=0 ppid=1 pid=5334 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:48.325453 systemd[1]: Started session-8.scope. Feb 9 19:16:48.328413 systemd-logind[1733]: New session 8 of user core. Feb 9 19:16:48.344309 kernel: audit: type=1327 audit(1707506208.307:317): proctitle=737368643A20636F7265205B707269765D Feb 9 19:16:48.307000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:16:48.339000 audit[5334]: USER_START pid=5334 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:16:48.358897 kernel: audit: type=1105 audit(1707506208.339:318): pid=5334 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:16:48.359057 kernel: audit: type=1103 audit(1707506208.342:319): pid=5337 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:16:48.342000 audit[5337]: CRED_ACQ pid=5337 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:16:48.642165 sshd[5334]: 
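The `type=1327` records above carry the process title as a hex string (`proctitle=737368643A20636F7265205B707269765D`). As a minimal sketch of how such a payload can be decoded for reading logs like this one (the helper name is illustrative, not part of any audit tooling):

```python
import binascii

def decode_proctitle(hex_str: str) -> str:
    """Decode an audit PROCTITLE hex payload.

    The kernel hex-encodes the raw argv area; NUL bytes separate
    the argv entries, so replace them with spaces for display.
    """
    raw = binascii.unhexlify(hex_str)
    return raw.replace(b"\x00", b" ").decode("utf-8", errors="replace")

# The payload recorded for pid 5334 above:
print(decode_proctitle("737368643A20636F7265205B707269765D"))
# → sshd: core [priv]
```

Decoded, the title matches the privileged sshd monitor process for the `core` login seen in the surrounding records.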
pam_unix(sshd:session): session closed for user core Feb 9 19:16:48.644000 audit[5334]: USER_END pid=5334 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:16:48.648198 systemd-logind[1733]: Session 8 logged out. Waiting for processes to exit. Feb 9 19:16:48.651353 systemd[1]: sshd@7-172.31.18.155:22-147.75.109.163:50620.service: Deactivated successfully. Feb 9 19:16:48.652973 systemd[1]: session-8.scope: Deactivated successfully. Feb 9 19:16:48.655600 systemd-logind[1733]: Removed session 8. Feb 9 19:16:48.644000 audit[5334]: CRED_DISP pid=5334 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:16:48.666552 kernel: audit: type=1106 audit(1707506208.644:320): pid=5334 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:16:48.666706 kernel: audit: type=1104 audit(1707506208.644:321): pid=5334 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:16:48.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.18.155:22-147.75.109.163:50620 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:16:53.670733 systemd[1]: Started sshd@8-172.31.18.155:22-147.75.109.163:50626.service. Feb 9 19:16:53.682145 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 19:16:53.682221 kernel: audit: type=1130 audit(1707506213.670:323): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.18.155:22-147.75.109.163:50626 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:53.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.18.155:22-147.75.109.163:50626 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:53.842000 audit[5369]: USER_ACCT pid=5369 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:16:53.843490 sshd[5369]: Accepted publickey for core from 147.75.109.163 port 50626 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:16:53.846629 sshd[5369]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:16:53.845000 audit[5369]: CRED_ACQ pid=5369 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:16:53.853922 kernel: audit: type=1101 audit(1707506213.842:324): pid=5369 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:16:53.869948 kernel: audit: type=1103 
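Every audit record in this log is stamped `audit(epoch.millis:serial)`, e.g. `audit(1707506213.842:324)`; the serial number is what ties a kernel echo (`kernel: audit: type=…`) back to the original record. A small sketch for pulling these stamps apart (the function and regex names are illustrative assumptions, not audit tooling):

```python
import re
from datetime import datetime, timezone

# Matches the "audit(<epoch>.<millis>:<serial>)" stamp on each record.
AUDIT_STAMP = re.compile(r"audit\((\d+)\.(\d+):(\d+)\)")

def parse_audit_stamp(line: str):
    """Return (UTC timestamp, milliseconds, serial) for an audit log line, or None."""
    m = AUDIT_STAMP.search(line)
    if not m:
        return None
    epoch, millis, serial = int(m.group(1)), int(m.group(2)), int(m.group(3))
    ts = datetime.fromtimestamp(epoch, tz=timezone.utc)
    return ts, millis, serial

ts, millis, serial = parse_audit_stamp(
    "kernel: audit: type=1101 audit(1707506213.842:324): pid=5369"
)
print(ts.strftime("%b %d %H:%M:%S"), serial)
# → Feb 09 19:16:53 324
```

Records with the same serial (here `:324`) describe one event, even when the userspace line and the kauditd kernel echo appear at slightly different wall-clock positions in the journal.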
audit(1707506213.845:325): pid=5369 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:16:53.870132 kernel: audit: type=1006 audit(1707506213.845:326): pid=5369 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Feb 9 19:16:53.845000 audit[5369]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdec45840 a2=3 a3=1 items=0 ppid=1 pid=5369 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:53.880257 kernel: audit: type=1300 audit(1707506213.845:326): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdec45840 a2=3 a3=1 items=0 ppid=1 pid=5369 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:53.880956 kernel: audit: type=1327 audit(1707506213.845:326): proctitle=737368643A20636F7265205B707269765D Feb 9 19:16:53.845000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:16:53.888979 systemd[1]: Started session-9.scope. Feb 9 19:16:53.889447 systemd-logind[1733]: New session 9 of user core. 
Feb 9 19:16:53.901000 audit[5369]: USER_START pid=5369 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:16:53.904000 audit[5372]: CRED_ACQ pid=5372 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:16:53.931456 kernel: audit: type=1105 audit(1707506213.901:327): pid=5369 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:16:53.931669 kernel: audit: type=1103 audit(1707506213.904:328): pid=5372 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:16:54.136404 sshd[5369]: pam_unix(sshd:session): session closed for user core Feb 9 19:16:54.137000 audit[5369]: USER_END pid=5369 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:16:54.142676 systemd[1]: sshd@8-172.31.18.155:22-147.75.109.163:50626.service: Deactivated successfully. Feb 9 19:16:54.144562 systemd[1]: session-9.scope: Deactivated successfully. 
Feb 9 19:16:54.139000 audit[5369]: CRED_DISP pid=5369 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:16:54.160249 kernel: audit: type=1106 audit(1707506214.137:329): pid=5369 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:16:54.160415 kernel: audit: type=1104 audit(1707506214.139:330): pid=5369 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:16:54.142000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.18.155:22-147.75.109.163:50626 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:54.160594 systemd-logind[1733]: Session 9 logged out. Waiting for processes to exit. Feb 9 19:16:54.164176 systemd-logind[1733]: Removed session 9. Feb 9 19:16:57.627684 systemd[1]: run-containerd-runc-k8s.io-c6f4c3569b2a573f9be026a6ae8f86427ea67b4b025efac18e6a297cbc941376-runc.DHLoWn.mount: Deactivated successfully. Feb 9 19:16:57.812293 systemd[1]: run-containerd-runc-k8s.io-6f79caf321c8874bafdc8568d95e32e279ac0301f1bbc7f8f5c0fad242cb7994-runc.dzjwbY.mount: Deactivated successfully. 
Feb 9 19:16:59.174010 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 19:16:59.174179 kernel: audit: type=1130 audit(1707506219.161:332): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.18.155:22-147.75.109.163:39166 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:59.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.18.155:22-147.75.109.163:39166 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:59.162901 systemd[1]: Started sshd@9-172.31.18.155:22-147.75.109.163:39166.service. Feb 9 19:16:59.336140 sshd[5431]: Accepted publickey for core from 147.75.109.163 port 39166 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:16:59.334000 audit[5431]: USER_ACCT pid=5431 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:16:59.346015 sshd[5431]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:16:59.343000 audit[5431]: CRED_ACQ pid=5431 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:16:59.359113 kernel: audit: type=1101 audit(1707506219.334:333): pid=5431 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:16:59.359322 kernel: audit: type=1103 audit(1707506219.343:334): 
pid=5431 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:16:59.359389 kernel: audit: type=1006 audit(1707506219.343:335): pid=5431 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Feb 9 19:16:59.343000 audit[5431]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffbe1f130 a2=3 a3=1 items=0 ppid=1 pid=5431 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:59.375556 kernel: audit: type=1300 audit(1707506219.343:335): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffbe1f130 a2=3 a3=1 items=0 ppid=1 pid=5431 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:59.343000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:16:59.379964 kernel: audit: type=1327 audit(1707506219.343:335): proctitle=737368643A20636F7265205B707269765D Feb 9 19:16:59.386002 systemd-logind[1733]: New session 10 of user core. Feb 9 19:16:59.387440 systemd[1]: Started session-10.scope. 
Feb 9 19:16:59.398000 audit[5431]: USER_START pid=5431 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:16:59.412890 kernel: audit: type=1105 audit(1707506219.398:336): pid=5431 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:16:59.411000 audit[5434]: CRED_ACQ pid=5434 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:16:59.423816 kernel: audit: type=1103 audit(1707506219.411:337): pid=5434 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:16:59.673029 sshd[5431]: pam_unix(sshd:session): session closed for user core Feb 9 19:16:59.673000 audit[5431]: USER_END pid=5431 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:16:59.673000 audit[5431]: CRED_DISP pid=5431 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 
19:16:59.687452 systemd[1]: sshd@9-172.31.18.155:22-147.75.109.163:39166.service: Deactivated successfully. Feb 9 19:16:59.692958 systemd[1]: session-10.scope: Deactivated successfully. Feb 9 19:16:59.694707 systemd-logind[1733]: Session 10 logged out. Waiting for processes to exit. Feb 9 19:16:59.696011 kernel: audit: type=1106 audit(1707506219.673:338): pid=5431 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:16:59.696157 kernel: audit: type=1104 audit(1707506219.673:339): pid=5431 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:16:59.698346 systemd-logind[1733]: Removed session 10. Feb 9 19:16:59.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.18.155:22-147.75.109.163:39166 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:17:04.701281 systemd[1]: Started sshd@10-172.31.18.155:22-147.75.109.163:43432.service. Feb 9 19:17:04.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.18.155:22-147.75.109.163:43432 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:17:04.709789 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 19:17:04.709912 kernel: audit: type=1130 audit(1707506224.700:341): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.18.155:22-147.75.109.163:43432 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 9 19:17:04.874000 audit[5447]: USER_ACCT pid=5447 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:04.879609 sshd[5447]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:04.882112 sshd[5447]: Accepted publickey for core from 147.75.109.163 port 43432 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:04.876000 audit[5447]: CRED_ACQ pid=5447 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:04.896537 kernel: audit: type=1101 audit(1707506224.874:342): pid=5447 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:04.896706 kernel: audit: type=1103 audit(1707506224.876:343): pid=5447 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:04.902818 kernel: audit: type=1006 audit(1707506224.877:344): pid=5447 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Feb 9 19:17:04.877000 audit[5447]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffeae42570 a2=3 a3=1 items=0 ppid=1 pid=5447 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" 
subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:17:04.908910 systemd[1]: Started session-11.scope. Feb 9 19:17:04.910856 systemd-logind[1733]: New session 11 of user core. Feb 9 19:17:04.913912 kernel: audit: type=1300 audit(1707506224.877:344): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffeae42570 a2=3 a3=1 items=0 ppid=1 pid=5447 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:17:04.914024 kernel: audit: type=1327 audit(1707506224.877:344): proctitle=737368643A20636F7265205B707269765D Feb 9 19:17:04.877000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:17:04.925000 audit[5447]: USER_START pid=5447 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:04.929000 audit[5450]: CRED_ACQ pid=5450 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:04.948616 kernel: audit: type=1105 audit(1707506224.925:345): pid=5447 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:04.948809 kernel: audit: type=1103 audit(1707506224.929:346): pid=5450 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh 
res=success' Feb 9 19:17:05.170974 sshd[5447]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:05.172000 audit[5447]: USER_END pid=5447 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:05.177366 systemd[1]: sshd@10-172.31.18.155:22-147.75.109.163:43432.service: Deactivated successfully. Feb 9 19:17:05.179012 systemd[1]: session-11.scope: Deactivated successfully. Feb 9 19:17:05.187338 systemd-logind[1733]: Session 11 logged out. Waiting for processes to exit. Feb 9 19:17:05.172000 audit[5447]: CRED_DISP pid=5447 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:05.197499 kernel: audit: type=1106 audit(1707506225.172:347): pid=5447 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:05.197657 kernel: audit: type=1104 audit(1707506225.172:348): pid=5447 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:05.197023 systemd[1]: Started sshd@11-172.31.18.155:22-147.75.109.163:43444.service. 
Feb 9 19:17:05.176000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.18.155:22-147.75.109.163:43432 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:17:05.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.18.155:22-147.75.109.163:43444 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:17:05.206387 systemd-logind[1733]: Removed session 11. Feb 9 19:17:05.382406 sshd[5461]: Accepted publickey for core from 147.75.109.163 port 43444 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:05.380000 audit[5461]: USER_ACCT pid=5461 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:05.382000 audit[5461]: CRED_ACQ pid=5461 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:05.383000 audit[5461]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffff5ba950 a2=3 a3=1 items=0 ppid=1 pid=5461 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:17:05.383000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:17:05.386023 sshd[5461]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:05.396212 systemd[1]: Started session-12.scope. Feb 9 19:17:05.396785 systemd-logind[1733]: New session 12 of user core. 
Feb 9 19:17:05.410000 audit[5461]: USER_START pid=5461 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:05.413000 audit[5464]: CRED_ACQ pid=5464 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:07.551910 sshd[5461]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:07.552000 audit[5461]: USER_END pid=5461 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:07.553000 audit[5461]: CRED_DISP pid=5461 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:07.559110 systemd[1]: sshd@11-172.31.18.155:22-147.75.109.163:43444.service: Deactivated successfully. Feb 9 19:17:07.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.18.155:22-147.75.109.163:43444 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:17:07.561663 systemd-logind[1733]: Session 12 logged out. Waiting for processes to exit. Feb 9 19:17:07.563262 systemd[1]: session-12.scope: Deactivated successfully. Feb 9 19:17:07.566132 systemd-logind[1733]: Removed session 12. 
Feb 9 19:17:07.581306 systemd[1]: Started sshd@12-172.31.18.155:22-147.75.109.163:43448.service. Feb 9 19:17:07.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.18.155:22-147.75.109.163:43448 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:17:07.778000 audit[5473]: USER_ACCT pid=5473 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:07.781005 sshd[5473]: Accepted publickey for core from 147.75.109.163 port 43448 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:07.781000 audit[5473]: CRED_ACQ pid=5473 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:07.781000 audit[5473]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd6dfadd0 a2=3 a3=1 items=0 ppid=1 pid=5473 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:17:07.781000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:17:07.784807 sshd[5473]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:07.797369 systemd[1]: Started session-13.scope. Feb 9 19:17:07.799215 systemd-logind[1733]: New session 13 of user core. 
Feb 9 19:17:07.818000 audit[5473]: USER_START pid=5473 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:07.822000 audit[5476]: CRED_ACQ pid=5476 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:08.116203 sshd[5473]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:08.117000 audit[5473]: USER_END pid=5473 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:08.117000 audit[5473]: CRED_DISP pid=5473 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:08.122686 systemd-logind[1733]: Session 13 logged out. Waiting for processes to exit. Feb 9 19:17:08.124777 systemd[1]: sshd@12-172.31.18.155:22-147.75.109.163:43448.service: Deactivated successfully. Feb 9 19:17:08.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.18.155:22-147.75.109.163:43448 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:17:08.126987 systemd[1]: session-13.scope: Deactivated successfully. Feb 9 19:17:08.132044 systemd-logind[1733]: Removed session 13. 
Feb 9 19:17:13.142973 systemd[1]: Started sshd@13-172.31.18.155:22-147.75.109.163:43458.service. Feb 9 19:17:13.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.18.155:22-147.75.109.163:43458 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:17:13.146787 kernel: kauditd_printk_skb: 23 callbacks suppressed Feb 9 19:17:13.146955 kernel: audit: type=1130 audit(1707506233.141:368): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.18.155:22-147.75.109.163:43458 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:17:13.332000 audit[5490]: USER_ACCT pid=5490 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:13.345872 sshd[5490]: Accepted publickey for core from 147.75.109.163 port 43458 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:13.347062 sshd[5490]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:13.344000 audit[5490]: CRED_ACQ pid=5490 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:13.363806 kernel: audit: type=1101 audit(1707506233.332:369): pid=5490 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:13.363955 kernel: audit: type=1103 audit(1707506233.344:370): 
pid=5490 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:13.366729 systemd[1]: Started session-14.scope. Feb 9 19:17:13.370253 kernel: audit: type=1006 audit(1707506233.344:371): pid=5490 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Feb 9 19:17:13.372255 systemd-logind[1733]: New session 14 of user core. Feb 9 19:17:13.344000 audit[5490]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff55c1ac0 a2=3 a3=1 items=0 ppid=1 pid=5490 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:17:13.385735 kernel: audit: type=1300 audit(1707506233.344:371): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff55c1ac0 a2=3 a3=1 items=0 ppid=1 pid=5490 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:17:13.344000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:17:13.411691 kernel: audit: type=1327 audit(1707506233.344:371): proctitle=737368643A20636F7265205B707269765D Feb 9 19:17:13.399000 audit[5490]: USER_START pid=5490 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:13.402000 audit[5493]: CRED_ACQ pid=5493 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 
addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:13.433526 kernel: audit: type=1105 audit(1707506233.399:372): pid=5490 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:13.433680 kernel: audit: type=1103 audit(1707506233.402:373): pid=5493 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:13.749668 sshd[5490]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:13.749000 audit[5490]: USER_END pid=5490 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:13.758624 systemd[1]: sshd@13-172.31.18.155:22-147.75.109.163:43458.service: Deactivated successfully. Feb 9 19:17:13.760492 systemd[1]: session-14.scope: Deactivated successfully. 
Feb 9 19:17:13.754000 audit[5490]: CRED_DISP pid=5490 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:13.777041 kernel: audit: type=1106 audit(1707506233.749:374): pid=5490 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:13.777200 kernel: audit: type=1104 audit(1707506233.754:375): pid=5490 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:13.777912 systemd-logind[1733]: Session 14 logged out. Waiting for processes to exit. Feb 9 19:17:13.779824 systemd-logind[1733]: Removed session 14. Feb 9 19:17:13.754000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.18.155:22-147.75.109.163:43458 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:17:18.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.18.155:22-147.75.109.163:55784 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:17:18.775960 systemd[1]: Started sshd@14-172.31.18.155:22-147.75.109.163:55784.service. 
Feb 9 19:17:18.779790 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 19:17:18.779929 kernel: audit: type=1130 audit(1707506238.775:377): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.18.155:22-147.75.109.163:55784 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:17:18.949000 audit[5512]: USER_ACCT pid=5512 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:18.950998 sshd[5512]: Accepted publickey for core from 147.75.109.163 port 55784 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:18.961000 audit[5512]: CRED_ACQ pid=5512 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:18.963776 sshd[5512]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:18.972171 kernel: audit: type=1101 audit(1707506238.949:378): pid=5512 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:18.972318 kernel: audit: type=1103 audit(1707506238.961:379): pid=5512 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:18.978339 kernel: audit: type=1006 audit(1707506238.961:380): pid=5512 uid=0 subj=system_u:system_r:kernel_t:s0 
old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Feb 9 19:17:18.961000 audit[5512]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcf555490 a2=3 a3=1 items=0 ppid=1 pid=5512 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:17:18.989047 kernel: audit: type=1300 audit(1707506238.961:380): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcf555490 a2=3 a3=1 items=0 ppid=1 pid=5512 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:17:18.961000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:17:18.993784 kernel: audit: type=1327 audit(1707506238.961:380): proctitle=737368643A20636F7265205B707269765D Feb 9 19:17:18.997185 systemd-logind[1733]: New session 15 of user core. Feb 9 19:17:18.998680 systemd[1]: Started session-15.scope. 
Feb 9 19:17:19.011000 audit[5512]: USER_START pid=5512 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:19.011000 audit[5515]: CRED_ACQ pid=5515 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:19.032839 kernel: audit: type=1105 audit(1707506239.011:381): pid=5512 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:19.032937 kernel: audit: type=1103 audit(1707506239.011:382): pid=5515 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:19.253090 sshd[5512]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:19.254000 audit[5512]: USER_END pid=5512 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:19.259432 systemd[1]: sshd@14-172.31.18.155:22-147.75.109.163:55784.service: Deactivated successfully. Feb 9 19:17:19.260933 systemd[1]: session-15.scope: Deactivated successfully. 
Feb 9 19:17:19.256000 audit[5512]: CRED_DISP pid=5512 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:19.276914 kernel: audit: type=1106 audit(1707506239.254:383): pid=5512 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:19.277075 kernel: audit: type=1104 audit(1707506239.256:384): pid=5512 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:19.277349 systemd-logind[1733]: Session 15 logged out. Waiting for processes to exit. Feb 9 19:17:19.259000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.18.155:22-147.75.109.163:55784 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:17:19.279871 systemd-logind[1733]: Removed session 15. Feb 9 19:17:21.161078 systemd[1]: run-containerd-runc-k8s.io-c6f4c3569b2a573f9be026a6ae8f86427ea67b4b025efac18e6a297cbc941376-runc.Z30XbZ.mount: Deactivated successfully. Feb 9 19:17:24.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.18.155:22-147.75.109.163:55800 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:17:24.279298 systemd[1]: Started sshd@15-172.31.18.155:22-147.75.109.163:55800.service. 
Feb 9 19:17:24.281783 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 19:17:24.281854 kernel: audit: type=1130 audit(1707506244.278:386): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.18.155:22-147.75.109.163:55800 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:17:24.451000 audit[5544]: USER_ACCT pid=5544 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:24.453067 sshd[5544]: Accepted publickey for core from 147.75.109.163 port 55800 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:24.463877 kernel: audit: type=1101 audit(1707506244.451:387): pid=5544 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:24.465655 sshd[5544]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:24.464000 audit[5544]: CRED_ACQ pid=5544 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:24.481689 kernel: audit: type=1103 audit(1707506244.464:388): pid=5544 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:24.481908 kernel: audit: type=1006 audit(1707506244.464:389): pid=5544 uid=0 subj=system_u:system_r:kernel_t:s0 
old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Feb 9 19:17:24.464000 audit[5544]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffe1bcbe0 a2=3 a3=1 items=0 ppid=1 pid=5544 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:17:24.492212 kernel: audit: type=1300 audit(1707506244.464:389): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffe1bcbe0 a2=3 a3=1 items=0 ppid=1 pid=5544 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:17:24.464000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:17:24.496710 kernel: audit: type=1327 audit(1707506244.464:389): proctitle=737368643A20636F7265205B707269765D Feb 9 19:17:24.504224 systemd[1]: Started session-16.scope. Feb 9 19:17:24.504882 systemd-logind[1733]: New session 16 of user core. 
Feb 9 19:17:24.517000 audit[5544]: USER_START pid=5544 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:24.531000 audit[5547]: CRED_ACQ pid=5547 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:24.542654 kernel: audit: type=1105 audit(1707506244.517:390): pid=5544 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:24.542828 kernel: audit: type=1103 audit(1707506244.531:391): pid=5547 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:24.767027 sshd[5544]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:24.768000 audit[5544]: USER_END pid=5544 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:24.772734 systemd-logind[1733]: Session 16 logged out. Waiting for processes to exit. Feb 9 19:17:24.775975 systemd[1]: sshd@15-172.31.18.155:22-147.75.109.163:55800.service: Deactivated successfully. 
Feb 9 19:17:24.777614 systemd[1]: session-16.scope: Deactivated successfully. Feb 9 19:17:24.769000 audit[5544]: CRED_DISP pid=5544 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:24.780560 systemd-logind[1733]: Removed session 16. Feb 9 19:17:24.790458 kernel: audit: type=1106 audit(1707506244.768:392): pid=5544 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:24.790609 kernel: audit: type=1104 audit(1707506244.769:393): pid=5544 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:24.775000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.18.155:22-147.75.109.163:55800 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:17:27.815604 systemd[1]: run-containerd-runc-k8s.io-6f79caf321c8874bafdc8568d95e32e279ac0301f1bbc7f8f5c0fad242cb7994-runc.ux94Gl.mount: Deactivated successfully. Feb 9 19:17:29.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.18.155:22-147.75.109.163:59664 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:17:29.792896 systemd[1]: Started sshd@16-172.31.18.155:22-147.75.109.163:59664.service. 
Feb 9 19:17:29.795590 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 19:17:29.795670 kernel: audit: type=1130 audit(1707506249.792:395): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.18.155:22-147.75.109.163:59664 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:17:29.963000 audit[5583]: USER_ACCT pid=5583 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:29.964924 sshd[5583]: Accepted publickey for core from 147.75.109.163 port 59664 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:29.976821 kernel: audit: type=1101 audit(1707506249.963:396): pid=5583 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:29.975000 audit[5583]: CRED_ACQ pid=5583 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:29.977713 sshd[5583]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:29.994319 kernel: audit: type=1103 audit(1707506249.975:397): pid=5583 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:29.994445 kernel: audit: type=1006 audit(1707506249.976:398): pid=5583 uid=0 subj=system_u:system_r:kernel_t:s0 
old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Feb 9 19:17:29.994506 kernel: audit: type=1300 audit(1707506249.976:398): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffd539b80 a2=3 a3=1 items=0 ppid=1 pid=5583 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:17:29.976000 audit[5583]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffd539b80 a2=3 a3=1 items=0 ppid=1 pid=5583 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:17:29.993667 systemd[1]: Started session-17.scope. Feb 9 19:17:29.995393 systemd-logind[1733]: New session 17 of user core. Feb 9 19:17:30.004014 kernel: audit: type=1327 audit(1707506249.976:398): proctitle=737368643A20636F7265205B707269765D Feb 9 19:17:29.976000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:17:30.013000 audit[5583]: USER_START pid=5583 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:30.016000 audit[5586]: CRED_ACQ pid=5586 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:30.035442 kernel: audit: type=1105 audit(1707506250.013:399): pid=5583 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 
addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:30.035515 kernel: audit: type=1103 audit(1707506250.016:400): pid=5586 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:30.249549 sshd[5583]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:30.250000 audit[5583]: USER_END pid=5583 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:30.254371 systemd[1]: sshd@16-172.31.18.155:22-147.75.109.163:59664.service: Deactivated successfully. Feb 9 19:17:30.256258 systemd[1]: session-17.scope: Deactivated successfully. Feb 9 19:17:30.251000 audit[5583]: CRED_DISP pid=5583 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:30.274362 kernel: audit: type=1106 audit(1707506250.250:401): pid=5583 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:30.274502 kernel: audit: type=1104 audit(1707506250.251:402): pid=5583 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:30.273557 systemd-logind[1733]: Session 17 logged out. 
Waiting for processes to exit. Feb 9 19:17:30.254000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.18.155:22-147.75.109.163:59664 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:17:30.278272 systemd-logind[1733]: Removed session 17. Feb 9 19:17:30.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.18.155:22-147.75.109.163:59674 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:17:30.283432 systemd[1]: Started sshd@17-172.31.18.155:22-147.75.109.163:59674.service. Feb 9 19:17:30.457000 audit[5596]: USER_ACCT pid=5596 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:30.458510 sshd[5596]: Accepted publickey for core from 147.75.109.163 port 59674 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:30.459000 audit[5596]: CRED_ACQ pid=5596 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:30.459000 audit[5596]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd8ef3160 a2=3 a3=1 items=0 ppid=1 pid=5596 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:17:30.459000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:17:30.461729 sshd[5596]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:30.470427 
systemd-logind[1733]: New session 18 of user core. Feb 9 19:17:30.471444 systemd[1]: Started session-18.scope. Feb 9 19:17:30.482000 audit[5596]: USER_START pid=5596 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:30.484000 audit[5601]: CRED_ACQ pid=5601 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:30.914430 sshd[5596]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:30.915000 audit[5596]: USER_END pid=5596 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:30.915000 audit[5596]: CRED_DISP pid=5596 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:30.919803 systemd-logind[1733]: Session 18 logged out. Waiting for processes to exit. Feb 9 19:17:30.920540 systemd[1]: sshd@17-172.31.18.155:22-147.75.109.163:59674.service: Deactivated successfully. Feb 9 19:17:30.920000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.18.155:22-147.75.109.163:59674 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:17:30.922835 systemd[1]: session-18.scope: Deactivated successfully. 
Feb 9 19:17:30.924847 systemd-logind[1733]: Removed session 18. Feb 9 19:17:30.940440 systemd[1]: Started sshd@18-172.31.18.155:22-147.75.109.163:59686.service. Feb 9 19:17:30.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.18.155:22-147.75.109.163:59686 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:17:31.113000 audit[5609]: USER_ACCT pid=5609 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:31.114741 sshd[5609]: Accepted publickey for core from 147.75.109.163 port 59686 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:31.115000 audit[5609]: CRED_ACQ pid=5609 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:31.115000 audit[5609]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd48a7320 a2=3 a3=1 items=0 ppid=1 pid=5609 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:17:31.115000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:17:31.117252 sshd[5609]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:31.124389 systemd-logind[1733]: New session 19 of user core. Feb 9 19:17:31.126161 systemd[1]: Started session-19.scope. 
Feb 9 19:17:31.139000 audit[5609]: USER_START pid=5609 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:31.142000 audit[5612]: CRED_ACQ pid=5612 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:32.566394 sshd[5609]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:32.568000 audit[5609]: USER_END pid=5609 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:32.568000 audit[5609]: CRED_DISP pid=5609 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:32.572589 systemd-logind[1733]: Session 19 logged out. Waiting for processes to exit. Feb 9 19:17:32.573000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.18.155:22-147.75.109.163:59686 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:17:32.573428 systemd[1]: sshd@18-172.31.18.155:22-147.75.109.163:59686.service: Deactivated successfully. Feb 9 19:17:32.575171 systemd[1]: session-19.scope: Deactivated successfully. Feb 9 19:17:32.579918 systemd-logind[1733]: Removed session 19. 
Feb 9 19:17:32.592392 systemd[1]: Started sshd@19-172.31.18.155:22-147.75.109.163:59690.service. Feb 9 19:17:32.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.18.155:22-147.75.109.163:59690 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:17:32.708000 audit[5653]: NETFILTER_CFG table=filter:125 family=2 entries=18 op=nft_register_rule pid=5653 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:17:32.708000 audit[5653]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=10364 a0=3 a1=fffffe6313d0 a2=0 a3=ffffb39696c0 items=0 ppid=3237 pid=5653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:17:32.708000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:17:32.711000 audit[5653]: NETFILTER_CFG table=nat:126 family=2 entries=78 op=nft_register_rule pid=5653 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:17:32.711000 audit[5653]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=fffffe6313d0 a2=0 a3=ffffb39696c0 items=0 ppid=3237 pid=5653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:17:32.711000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:17:32.785246 sshd[5635]: Accepted publickey for core from 147.75.109.163 port 59690 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:32.784000 audit[5635]: USER_ACCT pid=5635 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:32.786000 audit[5635]: CRED_ACQ pid=5635 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:32.786000 audit[5635]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffde149c70 a2=3 a3=1 items=0 ppid=1 pid=5635 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:17:32.786000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:17:32.788103 sshd[5635]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:32.798451 systemd[1]: Started session-20.scope. Feb 9 19:17:32.799158 systemd-logind[1733]: New session 20 of user core. 
Feb 9 19:17:32.810000 audit[5679]: NETFILTER_CFG table=filter:127 family=2 entries=30 op=nft_register_rule pid=5679 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:17:32.810000 audit[5679]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=10364 a0=3 a1=ffffec684ce0 a2=0 a3=ffff8e3956c0 items=0 ppid=3237 pid=5679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:17:32.810000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:17:32.820000 audit[5635]: USER_START pid=5635 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:32.824000 audit[5681]: CRED_ACQ pid=5681 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:32.824000 audit[5679]: NETFILTER_CFG table=nat:128 family=2 entries=78 op=nft_register_rule pid=5679 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:17:32.824000 audit[5679]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=ffffec684ce0 a2=0 a3=ffff8e3956c0 items=0 ppid=3237 pid=5679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:17:32.824000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 
19:17:33.329109 sshd[5635]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:33.331000 audit[5635]: USER_END pid=5635 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:33.332000 audit[5635]: CRED_DISP pid=5635 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:33.335525 systemd[1]: sshd@19-172.31.18.155:22-147.75.109.163:59690.service: Deactivated successfully. Feb 9 19:17:33.335000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.18.155:22-147.75.109.163:59690 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:17:33.337732 systemd-logind[1733]: Session 20 logged out. Waiting for processes to exit. Feb 9 19:17:33.338080 systemd[1]: session-20.scope: Deactivated successfully. Feb 9 19:17:33.341477 systemd-logind[1733]: Removed session 20. Feb 9 19:17:33.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.18.155:22-147.75.109.163:59692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:17:33.354372 systemd[1]: Started sshd@20-172.31.18.155:22-147.75.109.163:59692.service. 
Feb 9 19:17:33.524000 audit[5689]: USER_ACCT pid=5689 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:33.525134 sshd[5689]: Accepted publickey for core from 147.75.109.163 port 59692 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:33.526000 audit[5689]: CRED_ACQ pid=5689 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:33.526000 audit[5689]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe88ee8b0 a2=3 a3=1 items=0 ppid=1 pid=5689 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:17:33.526000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:17:33.528354 sshd[5689]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:33.537852 systemd-logind[1733]: New session 21 of user core. Feb 9 19:17:33.537987 systemd[1]: Started session-21.scope. 
Feb 9 19:17:33.548000 audit[5689]: USER_START pid=5689 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:33.551000 audit[5692]: CRED_ACQ pid=5692 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:33.809792 sshd[5689]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:33.811000 audit[5689]: USER_END pid=5689 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:33.811000 audit[5689]: CRED_DISP pid=5689 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:33.816010 systemd[1]: sshd@20-172.31.18.155:22-147.75.109.163:59692.service: Deactivated successfully. Feb 9 19:17:33.815000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.18.155:22-147.75.109.163:59692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:17:33.817822 systemd[1]: session-21.scope: Deactivated successfully. Feb 9 19:17:33.818510 systemd-logind[1733]: Session 21 logged out. Waiting for processes to exit. Feb 9 19:17:33.821148 systemd-logind[1733]: Removed session 21. 
Feb 9 19:17:38.835396 systemd[1]: Started sshd@21-172.31.18.155:22-147.75.109.163:60156.service. Feb 9 19:17:38.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.18.155:22-147.75.109.163:60156 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:17:38.843780 kernel: kauditd_printk_skb: 57 callbacks suppressed Feb 9 19:17:38.843919 kernel: audit: type=1130 audit(1707506258.834:444): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.18.155:22-147.75.109.163:60156 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:17:39.005000 audit[5702]: USER_ACCT pid=5702 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:39.007567 sshd[5702]: Accepted publickey for core from 147.75.109.163 port 60156 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:39.020534 sshd[5702]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:39.021999 kernel: audit: type=1101 audit(1707506259.005:445): pid=5702 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:39.022133 kernel: audit: type=1103 audit(1707506259.018:446): pid=5702 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:39.018000 audit[5702]: CRED_ACQ 
pid=5702 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:39.037673 kernel: audit: type=1006 audit(1707506259.018:447): pid=5702 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Feb 9 19:17:39.018000 audit[5702]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd25898e0 a2=3 a3=1 items=0 ppid=1 pid=5702 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:17:39.048573 kernel: audit: type=1300 audit(1707506259.018:447): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd25898e0 a2=3 a3=1 items=0 ppid=1 pid=5702 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:17:39.018000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:17:39.052746 kernel: audit: type=1327 audit(1707506259.018:447): proctitle=737368643A20636F7265205B707269765D Feb 9 19:17:39.054262 systemd-logind[1733]: New session 22 of user core. Feb 9 19:17:39.055250 systemd[1]: Started session-22.scope. 
Feb 9 19:17:39.073000 audit[5702]: USER_START pid=5702 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:39.091165 kernel: audit: type=1105 audit(1707506259.073:448): pid=5702 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:39.091323 kernel: audit: type=1103 audit(1707506259.076:449): pid=5705 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:39.076000 audit[5705]: CRED_ACQ pid=5705 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:39.334097 sshd[5702]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:39.341000 audit[5702]: USER_END pid=5702 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:39.346831 systemd[1]: sshd@21-172.31.18.155:22-147.75.109.163:60156.service: Deactivated successfully. Feb 9 19:17:39.349337 systemd[1]: session-22.scope: Deactivated successfully. Feb 9 19:17:39.356220 systemd-logind[1733]: Session 22 logged out. 
Waiting for processes to exit. Feb 9 19:17:39.341000 audit[5702]: CRED_DISP pid=5702 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:39.366646 kernel: audit: type=1106 audit(1707506259.341:450): pid=5702 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:39.366795 kernel: audit: type=1104 audit(1707506259.341:451): pid=5702 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:39.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.18.155:22-147.75.109.163:60156 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:17:39.367960 systemd-logind[1733]: Removed session 22. 
Feb 9 19:17:41.681000 audit[5739]: NETFILTER_CFG table=filter:129 family=2 entries=18 op=nft_register_rule pid=5739 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:17:41.681000 audit[5739]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffd2ec9db0 a2=0 a3=ffff925aa6c0 items=0 ppid=3237 pid=5739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:17:41.681000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:17:41.697000 audit[5739]: NETFILTER_CFG table=nat:130 family=2 entries=162 op=nft_register_chain pid=5739 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:17:41.697000 audit[5739]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=66940 a0=3 a1=ffffd2ec9db0 a2=0 a3=ffff925aa6c0 items=0 ppid=3237 pid=5739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:17:41.697000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:17:44.360636 systemd[1]: Started sshd@22-172.31.18.155:22-147.75.109.163:60160.service. Feb 9 19:17:44.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.18.155:22-147.75.109.163:60160 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:17:44.363761 kernel: kauditd_printk_skb: 7 callbacks suppressed Feb 9 19:17:44.363847 kernel: audit: type=1130 audit(1707506264.361:455): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.18.155:22-147.75.109.163:60160 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:17:44.542000 audit[5741]: USER_ACCT pid=5741 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:44.545940 sshd[5741]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:44.553943 sshd[5741]: Accepted publickey for core from 147.75.109.163 port 60160 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:44.562377 systemd-logind[1733]: New session 23 of user core. Feb 9 19:17:44.565042 systemd[1]: Started session-23.scope. 
Feb 9 19:17:44.544000 audit[5741]: CRED_ACQ pid=5741 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:44.568855 kernel: audit: type=1101 audit(1707506264.542:456): pid=5741 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:44.568960 kernel: audit: type=1103 audit(1707506264.544:457): pid=5741 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:44.544000 audit[5741]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff73c1650 a2=3 a3=1 items=0 ppid=1 pid=5741 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:17:44.616804 kernel: audit: type=1006 audit(1707506264.544:458): pid=5741 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Feb 9 19:17:44.627123 kubelet[3035]: I0209 19:17:44.617665 3035 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:17:44.544000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:17:44.649320 kernel: audit: type=1300 audit(1707506264.544:458): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff73c1650 a2=3 a3=1 items=0 ppid=1 pid=5741 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:17:44.649468 
kernel: audit: type=1327 audit(1707506264.544:458): proctitle=737368643A20636F7265205B707269765D Feb 9 19:17:44.586000 audit[5741]: USER_START pid=5741 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:44.589000 audit[5744]: CRED_ACQ pid=5744 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:44.667913 kernel: audit: type=1105 audit(1707506264.586:459): pid=5741 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:44.687794 kernel: audit: type=1103 audit(1707506264.589:460): pid=5744 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:44.768967 kubelet[3035]: I0209 19:17:44.768832 3035 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84sqj\" (UniqueName: \"kubernetes.io/projected/a0525892-ab60-444e-8723-bd774dacd542-kube-api-access-84sqj\") pod \"calico-apiserver-78fcdc4568-t7sl2\" (UID: \"a0525892-ab60-444e-8723-bd774dacd542\") " pod="calico-apiserver/calico-apiserver-78fcdc4568-t7sl2" Feb 9 19:17:44.769128 kubelet[3035]: I0209 19:17:44.769027 3035 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a0525892-ab60-444e-8723-bd774dacd542-calico-apiserver-certs\") pod \"calico-apiserver-78fcdc4568-t7sl2\" (UID: \"a0525892-ab60-444e-8723-bd774dacd542\") " pod="calico-apiserver/calico-apiserver-78fcdc4568-t7sl2" Feb 9 19:17:44.802000 audit[5777]: NETFILTER_CFG table=filter:131 family=2 entries=6 op=nft_register_rule pid=5777 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:17:44.802000 audit[5777]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=fffff20fada0 a2=0 a3=ffffadc9d6c0 items=0 ppid=3237 pid=5777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:17:44.822591 kernel: audit: type=1325 audit(1707506264.802:461): table=filter:131 family=2 entries=6 op=nft_register_rule pid=5777 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:17:44.822737 kernel: audit: type=1300 audit(1707506264.802:461): arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=fffff20fada0 a2=0 a3=ffffadc9d6c0 items=0 ppid=3237 pid=5777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:17:44.802000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:17:44.837000 audit[5777]: NETFILTER_CFG table=nat:132 family=2 entries=198 op=nft_register_rule pid=5777 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:17:44.837000 audit[5777]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=66940 a0=3 a1=fffff20fada0 a2=0 a3=ffffadc9d6c0 items=0 ppid=3237 pid=5777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:17:44.837000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:17:44.870387 kubelet[3035]: E0209 19:17:44.870332 3035 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Feb 9 19:17:44.870594 kubelet[3035]: E0209 19:17:44.870491 3035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0525892-ab60-444e-8723-bd774dacd542-calico-apiserver-certs podName:a0525892-ab60-444e-8723-bd774dacd542 nodeName:}" failed. No retries permitted until 2024-02-09 19:17:45.370441589 +0000 UTC m=+120.954215751 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/a0525892-ab60-444e-8723-bd774dacd542-calico-apiserver-certs") pod "calico-apiserver-78fcdc4568-t7sl2" (UID: "a0525892-ab60-444e-8723-bd774dacd542") : secret "calico-apiserver-certs" not found Feb 9 19:17:44.961671 sshd[5741]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:44.962000 audit[5741]: USER_END pid=5741 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:44.962000 audit[5741]: CRED_DISP pid=5741 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:17:44.967018 systemd-logind[1733]: Session 23 logged out. Waiting for processes to exit. 
Feb 9 19:17:44.967372 systemd[1]: sshd@22-172.31.18.155:22-147.75.109.163:60160.service: Deactivated successfully. Feb 9 19:17:44.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.18.155:22-147.75.109.163:60160 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:17:44.969827 systemd[1]: session-23.scope: Deactivated successfully. Feb 9 19:17:44.972114 systemd-logind[1733]: Removed session 23. Feb 9 19:17:45.027000 audit[5807]: NETFILTER_CFG table=filter:133 family=2 entries=7 op=nft_register_rule pid=5807 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:17:45.027000 audit[5807]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=fffff6caa610 a2=0 a3=ffffbd1676c0 items=0 ppid=3237 pid=5807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:17:45.027000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:17:45.036000 audit[5807]: NETFILTER_CFG table=nat:134 family=2 entries=198 op=nft_register_rule pid=5807 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:17:45.036000 audit[5807]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=66940 a0=3 a1=fffff6caa610 a2=0 a3=ffffbd1676c0 items=0 ppid=3237 pid=5807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:17:45.036000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:17:45.525320 env[1741]: time="2024-02-09T19:17:45.525224374Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78fcdc4568-t7sl2,Uid:a0525892-ab60-444e-8723-bd774dacd542,Namespace:calico-apiserver,Attempt:0,}" Feb 9 19:17:45.821483 systemd-networkd[1533]: calif547296c16e: Link UP Feb 9 19:17:45.830958 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:17:45.831104 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calif547296c16e: link becomes ready Feb 9 19:17:45.829827 systemd-networkd[1533]: calif547296c16e: Gained carrier Feb 9 19:17:45.834362 (udev-worker)[5828]: Network interface NamePolicy= disabled on kernel command line. Feb 9 19:17:45.862406 env[1741]: 2024-02-09 19:17:45.638 [INFO][5810] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--155-k8s-calico--apiserver--78fcdc4568--t7sl2-eth0 calico-apiserver-78fcdc4568- calico-apiserver a0525892-ab60-444e-8723-bd774dacd542 1109 0 2024-02-09 19:17:44 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:78fcdc4568 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-18-155 calico-apiserver-78fcdc4568-t7sl2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif547296c16e [] []}} ContainerID="6419f93d1e23763b50e3912d9cc193f71517aa37866a97f08b0a7710d42ea061" Namespace="calico-apiserver" Pod="calico-apiserver-78fcdc4568-t7sl2" WorkloadEndpoint="ip--172--31--18--155-k8s-calico--apiserver--78fcdc4568--t7sl2-" Feb 9 19:17:45.862406 env[1741]: 2024-02-09 19:17:45.639 [INFO][5810] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="6419f93d1e23763b50e3912d9cc193f71517aa37866a97f08b0a7710d42ea061" Namespace="calico-apiserver" Pod="calico-apiserver-78fcdc4568-t7sl2" WorkloadEndpoint="ip--172--31--18--155-k8s-calico--apiserver--78fcdc4568--t7sl2-eth0" Feb 9 19:17:45.862406 env[1741]: 
2024-02-09 19:17:45.733 [INFO][5821] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6419f93d1e23763b50e3912d9cc193f71517aa37866a97f08b0a7710d42ea061" HandleID="k8s-pod-network.6419f93d1e23763b50e3912d9cc193f71517aa37866a97f08b0a7710d42ea061" Workload="ip--172--31--18--155-k8s-calico--apiserver--78fcdc4568--t7sl2-eth0" Feb 9 19:17:45.862406 env[1741]: 2024-02-09 19:17:45.752 [INFO][5821] ipam_plugin.go 268: Auto assigning IP ContainerID="6419f93d1e23763b50e3912d9cc193f71517aa37866a97f08b0a7710d42ea061" HandleID="k8s-pod-network.6419f93d1e23763b50e3912d9cc193f71517aa37866a97f08b0a7710d42ea061" Workload="ip--172--31--18--155-k8s-calico--apiserver--78fcdc4568--t7sl2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002deae0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-18-155", "pod":"calico-apiserver-78fcdc4568-t7sl2", "timestamp":"2024-02-09 19:17:45.733252731 +0000 UTC"}, Hostname:"ip-172-31-18-155", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 19:17:45.862406 env[1741]: 2024-02-09 19:17:45.752 [INFO][5821] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:17:45.862406 env[1741]: 2024-02-09 19:17:45.752 [INFO][5821] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 9 19:17:45.862406 env[1741]: 2024-02-09 19:17:45.752 [INFO][5821] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-155' Feb 9 19:17:45.862406 env[1741]: 2024-02-09 19:17:45.755 [INFO][5821] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6419f93d1e23763b50e3912d9cc193f71517aa37866a97f08b0a7710d42ea061" host="ip-172-31-18-155" Feb 9 19:17:45.862406 env[1741]: 2024-02-09 19:17:45.761 [INFO][5821] ipam.go 372: Looking up existing affinities for host host="ip-172-31-18-155" Feb 9 19:17:45.862406 env[1741]: 2024-02-09 19:17:45.768 [INFO][5821] ipam.go 489: Trying affinity for 192.168.95.64/26 host="ip-172-31-18-155" Feb 9 19:17:45.862406 env[1741]: 2024-02-09 19:17:45.772 [INFO][5821] ipam.go 155: Attempting to load block cidr=192.168.95.64/26 host="ip-172-31-18-155" Feb 9 19:17:45.862406 env[1741]: 2024-02-09 19:17:45.786 [INFO][5821] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.95.64/26 host="ip-172-31-18-155" Feb 9 19:17:45.862406 env[1741]: 2024-02-09 19:17:45.786 [INFO][5821] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.95.64/26 handle="k8s-pod-network.6419f93d1e23763b50e3912d9cc193f71517aa37866a97f08b0a7710d42ea061" host="ip-172-31-18-155" Feb 9 19:17:45.862406 env[1741]: 2024-02-09 19:17:45.794 [INFO][5821] ipam.go 1682: Creating new handle: k8s-pod-network.6419f93d1e23763b50e3912d9cc193f71517aa37866a97f08b0a7710d42ea061 Feb 9 19:17:45.862406 env[1741]: 2024-02-09 19:17:45.801 [INFO][5821] ipam.go 1203: Writing block in order to claim IPs block=192.168.95.64/26 handle="k8s-pod-network.6419f93d1e23763b50e3912d9cc193f71517aa37866a97f08b0a7710d42ea061" host="ip-172-31-18-155" Feb 9 19:17:45.862406 env[1741]: 2024-02-09 19:17:45.813 [INFO][5821] ipam.go 1216: Successfully claimed IPs: [192.168.95.69/26] block=192.168.95.64/26 handle="k8s-pod-network.6419f93d1e23763b50e3912d9cc193f71517aa37866a97f08b0a7710d42ea061" host="ip-172-31-18-155" Feb 9 19:17:45.862406 
env[1741]: 2024-02-09 19:17:45.813 [INFO][5821] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.95.69/26] handle="k8s-pod-network.6419f93d1e23763b50e3912d9cc193f71517aa37866a97f08b0a7710d42ea061" host="ip-172-31-18-155" Feb 9 19:17:45.862406 env[1741]: 2024-02-09 19:17:45.813 [INFO][5821] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:17:45.862406 env[1741]: 2024-02-09 19:17:45.813 [INFO][5821] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.95.69/26] IPv6=[] ContainerID="6419f93d1e23763b50e3912d9cc193f71517aa37866a97f08b0a7710d42ea061" HandleID="k8s-pod-network.6419f93d1e23763b50e3912d9cc193f71517aa37866a97f08b0a7710d42ea061" Workload="ip--172--31--18--155-k8s-calico--apiserver--78fcdc4568--t7sl2-eth0" Feb 9 19:17:45.863671 env[1741]: 2024-02-09 19:17:45.816 [INFO][5810] k8s.go 385: Populated endpoint ContainerID="6419f93d1e23763b50e3912d9cc193f71517aa37866a97f08b0a7710d42ea061" Namespace="calico-apiserver" Pod="calico-apiserver-78fcdc4568-t7sl2" WorkloadEndpoint="ip--172--31--18--155-k8s-calico--apiserver--78fcdc4568--t7sl2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--155-k8s-calico--apiserver--78fcdc4568--t7sl2-eth0", GenerateName:"calico-apiserver-78fcdc4568-", Namespace:"calico-apiserver", SelfLink:"", UID:"a0525892-ab60-444e-8723-bd774dacd542", ResourceVersion:"1109", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 17, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78fcdc4568", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-155", ContainerID:"", Pod:"calico-apiserver-78fcdc4568-t7sl2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.95.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif547296c16e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:17:45.863671 env[1741]: 2024-02-09 19:17:45.816 [INFO][5810] k8s.go 386: Calico CNI using IPs: [192.168.95.69/32] ContainerID="6419f93d1e23763b50e3912d9cc193f71517aa37866a97f08b0a7710d42ea061" Namespace="calico-apiserver" Pod="calico-apiserver-78fcdc4568-t7sl2" WorkloadEndpoint="ip--172--31--18--155-k8s-calico--apiserver--78fcdc4568--t7sl2-eth0" Feb 9 19:17:45.863671 env[1741]: 2024-02-09 19:17:45.816 [INFO][5810] dataplane_linux.go 68: Setting the host side veth name to calif547296c16e ContainerID="6419f93d1e23763b50e3912d9cc193f71517aa37866a97f08b0a7710d42ea061" Namespace="calico-apiserver" Pod="calico-apiserver-78fcdc4568-t7sl2" WorkloadEndpoint="ip--172--31--18--155-k8s-calico--apiserver--78fcdc4568--t7sl2-eth0" Feb 9 19:17:45.863671 env[1741]: 2024-02-09 19:17:45.832 [INFO][5810] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="6419f93d1e23763b50e3912d9cc193f71517aa37866a97f08b0a7710d42ea061" Namespace="calico-apiserver" Pod="calico-apiserver-78fcdc4568-t7sl2" WorkloadEndpoint="ip--172--31--18--155-k8s-calico--apiserver--78fcdc4568--t7sl2-eth0" Feb 9 19:17:45.863671 env[1741]: 2024-02-09 19:17:45.832 [INFO][5810] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="6419f93d1e23763b50e3912d9cc193f71517aa37866a97f08b0a7710d42ea061" Namespace="calico-apiserver" Pod="calico-apiserver-78fcdc4568-t7sl2" 
WorkloadEndpoint="ip--172--31--18--155-k8s-calico--apiserver--78fcdc4568--t7sl2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--155-k8s-calico--apiserver--78fcdc4568--t7sl2-eth0", GenerateName:"calico-apiserver-78fcdc4568-", Namespace:"calico-apiserver", SelfLink:"", UID:"a0525892-ab60-444e-8723-bd774dacd542", ResourceVersion:"1109", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 17, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78fcdc4568", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-155", ContainerID:"6419f93d1e23763b50e3912d9cc193f71517aa37866a97f08b0a7710d42ea061", Pod:"calico-apiserver-78fcdc4568-t7sl2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.95.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif547296c16e", MAC:"9e:4d:81:35:a3:ca", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:17:45.863671 env[1741]: 2024-02-09 19:17:45.844 [INFO][5810] k8s.go 491: Wrote updated endpoint to datastore ContainerID="6419f93d1e23763b50e3912d9cc193f71517aa37866a97f08b0a7710d42ea061" Namespace="calico-apiserver" Pod="calico-apiserver-78fcdc4568-t7sl2" WorkloadEndpoint="ip--172--31--18--155-k8s-calico--apiserver--78fcdc4568--t7sl2-eth0" Feb 
9 19:17:45.952974 env[1741]: time="2024-02-09T19:17:45.921943382Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:17:45.952974 env[1741]: time="2024-02-09T19:17:45.922036952Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:17:45.952974 env[1741]: time="2024-02-09T19:17:45.922064110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:17:45.952974 env[1741]: time="2024-02-09T19:17:45.922485444Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6419f93d1e23763b50e3912d9cc193f71517aa37866a97f08b0a7710d42ea061 pid=5852 runtime=io.containerd.runc.v2 Feb 9 19:17:46.013000 audit[5869]: NETFILTER_CFG table=filter:135 family=2 entries=61 op=nft_register_chain pid=5869 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:17:46.013000 audit[5869]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=30940 a0=3 a1=ffffe5b7ae70 a2=0 a3=ffff91f74fa8 items=0 ppid=4258 pid=5869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:17:46.013000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:17:46.054317 systemd[1]: run-containerd-runc-k8s.io-6419f93d1e23763b50e3912d9cc193f71517aa37866a97f08b0a7710d42ea061-runc.4WpC6C.mount: Deactivated successfully. 
Feb 9 19:17:46.181042 env[1741]: time="2024-02-09T19:17:46.180843923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78fcdc4568-t7sl2,Uid:a0525892-ab60-444e-8723-bd774dacd542,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"6419f93d1e23763b50e3912d9cc193f71517aa37866a97f08b0a7710d42ea061\"" Feb 9 19:17:46.186993 env[1741]: time="2024-02-09T19:17:46.186923587Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\"" Feb 9 19:17:47.618106 systemd-networkd[1533]: calif547296c16e: Gained IPv6LL Feb 9 19:17:49.185904 env[1741]: time="2024-02-09T19:17:49.185815831Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:49.191558 env[1741]: time="2024-02-09T19:17:49.189949066Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:24494ef6c7de0e2dcf21ad9fb6c94801c53f120443e256a5e1b54eccd57058a9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:49.195920 env[1741]: time="2024-02-09T19:17:49.194352642Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:49.199631 env[1741]: time="2024-02-09T19:17:49.198278815Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:5ff0bdc8d0b2e9d7819703b18867f60f9153ed01da81e2bbfa22002abec9dc26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:49.200028 env[1741]: time="2024-02-09T19:17:49.199556381Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\" returns image reference \"sha256:24494ef6c7de0e2dcf21ad9fb6c94801c53f120443e256a5e1b54eccd57058a9\"" Feb 9 19:17:49.213338 env[1741]: time="2024-02-09T19:17:49.213277839Z" level=info msg="CreateContainer within sandbox 
\"6419f93d1e23763b50e3912d9cc193f71517aa37866a97f08b0a7710d42ea061\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 9 19:17:49.238803 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount863795311.mount: Deactivated successfully. Feb 9 19:17:49.246845 env[1741]: time="2024-02-09T19:17:49.246691619Z" level=info msg="CreateContainer within sandbox \"6419f93d1e23763b50e3912d9cc193f71517aa37866a97f08b0a7710d42ea061\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"76652a678a4bb1f491d8977a5b8f64e7638f65315ff5a2cc9aef9383c3301fb2\"" Feb 9 19:17:49.247938 env[1741]: time="2024-02-09T19:17:49.247888660Z" level=info msg="StartContainer for \"76652a678a4bb1f491d8977a5b8f64e7638f65315ff5a2cc9aef9383c3301fb2\"" Feb 9 19:17:49.308862 systemd[1]: run-containerd-runc-k8s.io-76652a678a4bb1f491d8977a5b8f64e7638f65315ff5a2cc9aef9383c3301fb2-runc.go7paS.mount: Deactivated successfully. Feb 9 19:17:49.394347 env[1741]: time="2024-02-09T19:17:49.394282433Z" level=info msg="StartContainer for \"76652a678a4bb1f491d8977a5b8f64e7638f65315ff5a2cc9aef9383c3301fb2\" returns successfully" Feb 9 19:17:49.649115 kernel: kauditd_printk_skb: 16 callbacks suppressed Feb 9 19:17:49.649284 kernel: audit: type=1325 audit(1707506269.640:469): table=filter:136 family=2 entries=8 op=nft_register_rule pid=5951 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:17:49.640000 audit[5951]: NETFILTER_CFG table=filter:136 family=2 entries=8 op=nft_register_rule pid=5951 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:17:49.640000 audit[5951]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=ffffc4e1bb60 a2=0 a3=ffffa8a9f6c0 items=0 ppid=3237 pid=5951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:17:49.662571 kernel: audit: type=1300 
audit(1707506269.640:469): arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=ffffc4e1bb60 a2=0 a3=ffffa8a9f6c0 items=0 ppid=3237 pid=5951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:17:49.640000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:17:49.670761 kernel: audit: type=1327 audit(1707506269.640:469): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:17:49.642000 audit[5951]: NETFILTER_CFG table=nat:137 family=2 entries=198 op=nft_register_rule pid=5951 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:17:49.676906 kernel: audit: type=1325 audit(1707506269.642:470): table=nat:137 family=2 entries=198 op=nft_register_rule pid=5951 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:17:49.677022 kernel: audit: type=1300 audit(1707506269.642:470): arch=c00000b7 syscall=211 success=yes exit=66940 a0=3 a1=ffffc4e1bb60 a2=0 a3=ffffa8a9f6c0 items=0 ppid=3237 pid=5951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:17:49.642000 audit[5951]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=66940 a0=3 a1=ffffc4e1bb60 a2=0 a3=ffffa8a9f6c0 items=0 ppid=3237 pid=5951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:17:49.642000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:17:49.694771 kernel: 
audit: type=1327 audit(1707506269.642:470): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:17:49.751054 kubelet[3035]: I0209 19:17:49.751015 3035 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-78fcdc4568-t7sl2" podStartSLOduration=-9.22337203110382e+09 pod.CreationTimestamp="2024-02-09 19:17:44 +0000 UTC" firstStartedPulling="2024-02-09 19:17:46.186401399 +0000 UTC m=+121.770175573" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:17:49.750813376 +0000 UTC m=+125.334587562" watchObservedRunningTime="2024-02-09 19:17:49.750955202 +0000 UTC m=+125.334729376" Feb 9 19:17:49.874000 audit[5977]: NETFILTER_CFG table=filter:138 family=2 entries=8 op=nft_register_rule pid=5977 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:17:49.881791 kernel: audit: type=1325 audit(1707506269.874:471): table=filter:138 family=2 entries=8 op=nft_register_rule pid=5977 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:17:49.874000 audit[5977]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=ffffda67caa0 a2=0 a3=ffffa8d8d6c0 items=0 ppid=3237 pid=5977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:17:49.874000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:17:49.909067 kernel: audit: type=1300 audit(1707506269.874:471): arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=ffffda67caa0 a2=0 a3=ffffa8d8d6c0 items=0 ppid=3237 pid=5977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:17:49.909280 kernel: audit: type=1327 audit(1707506269.874:471): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:17:49.900000 audit[5977]: NETFILTER_CFG table=nat:139 family=2 entries=198 op=nft_register_rule pid=5977 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:17:49.916517 kernel: audit: type=1325 audit(1707506269.900:472): table=nat:139 family=2 entries=198 op=nft_register_rule pid=5977 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:17:49.900000 audit[5977]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=66940 a0=3 a1=ffffda67caa0 a2=0 a3=ffffa8d8d6c0 items=0 ppid=3237 pid=5977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:17:49.900000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:17:49.988642 systemd[1]: Started sshd@23-172.31.18.155:22-147.75.109.163:51594.service. Feb 9 19:17:49.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.18.155:22-147.75.109.163:51594 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success'
Feb 9 19:17:50.160000 audit[5978]: USER_ACCT pid=5978 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:17:50.161605 sshd[5978]: Accepted publickey for core from 147.75.109.163 port 51594 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78
Feb 9 19:17:50.163000 audit[5978]: CRED_ACQ pid=5978 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:17:50.163000 audit[5978]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffee4e3b00 a2=3 a3=1 items=0 ppid=1 pid=5978 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:17:50.163000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 9 19:17:50.165811 sshd[5978]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:17:50.184350 systemd[1]: Started session-24.scope.
Feb 9 19:17:50.187155 systemd-logind[1733]: New session 24 of user core.
Feb 9 19:17:50.198000 audit[5978]: USER_START pid=5978 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:17:50.203000 audit[5981]: CRED_ACQ pid=5981 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:17:50.467674 sshd[5978]: pam_unix(sshd:session): session closed for user core
Feb 9 19:17:50.468000 audit[5978]: USER_END pid=5978 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:17:50.468000 audit[5978]: CRED_DISP pid=5978 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:17:50.472000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.18.155:22-147.75.109.163:51594 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:50.472630 systemd[1]: sshd@23-172.31.18.155:22-147.75.109.163:51594.service: Deactivated successfully.
Feb 9 19:17:50.476540 systemd[1]: session-24.scope: Deactivated successfully.
Feb 9 19:17:50.478117 systemd-logind[1733]: Session 24 logged out. Waiting for processes to exit.
Feb 9 19:17:50.483444 systemd-logind[1733]: Removed session 24.
Feb 9 19:17:55.495642 systemd[1]: Started sshd@24-172.31.18.155:22-147.75.109.163:36986.service.
Feb 9 19:17:55.500714 kernel: kauditd_printk_skb: 13 callbacks suppressed
Feb 9 19:17:55.501018 kernel: audit: type=1130 audit(1707506275.495:482): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.18.155:22-147.75.109.163:36986 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:55.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.18.155:22-147.75.109.163:36986 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:55.678000 audit[6014]: USER_ACCT pid=6014 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:17:55.680320 sshd[6014]: Accepted publickey for core from 147.75.109.163 port 36986 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78
Feb 9 19:17:55.691805 kernel: audit: type=1101 audit(1707506275.678:483): pid=6014 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:17:55.690000 audit[6014]: CRED_ACQ pid=6014 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:17:55.693536 sshd[6014]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:17:55.707823 kernel: audit: type=1103 audit(1707506275.690:484): pid=6014 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:17:55.707968 kernel: audit: type=1006 audit(1707506275.690:485): pid=6014 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1
Feb 9 19:17:55.690000 audit[6014]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd79e9f20 a2=3 a3=1 items=0 ppid=1 pid=6014 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:17:55.718684 kernel: audit: type=1300 audit(1707506275.690:485): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd79e9f20 a2=3 a3=1 items=0 ppid=1 pid=6014 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:17:55.690000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 9 19:17:55.724833 kernel: audit: type=1327 audit(1707506275.690:485): proctitle=737368643A20636F7265205B707269765D
Feb 9 19:17:55.724416 systemd[1]: Started session-25.scope.
Feb 9 19:17:55.725210 systemd-logind[1733]: New session 25 of user core.
Feb 9 19:17:55.734000 audit[6014]: USER_START pid=6014 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:17:55.737000 audit[6017]: CRED_ACQ pid=6017 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:17:55.761092 kernel: audit: type=1105 audit(1707506275.734:486): pid=6014 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:17:55.761319 kernel: audit: type=1103 audit(1707506275.737:487): pid=6017 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:17:55.985511 sshd[6014]: pam_unix(sshd:session): session closed for user core
Feb 9 19:17:55.985000 audit[6014]: USER_END pid=6014 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:17:55.990579 systemd[1]: sshd@24-172.31.18.155:22-147.75.109.163:36986.service: Deactivated successfully.
Feb 9 19:17:55.992295 systemd[1]: session-25.scope: Deactivated successfully.
Feb 9 19:17:55.986000 audit[6014]: CRED_DISP pid=6014 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:17:56.001063 systemd-logind[1733]: Session 25 logged out. Waiting for processes to exit.
Feb 9 19:17:56.002928 systemd-logind[1733]: Removed session 25.
Feb 9 19:17:56.008664 kernel: audit: type=1106 audit(1707506275.985:488): pid=6014 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:17:56.008853 kernel: audit: type=1104 audit(1707506275.986:489): pid=6014 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:17:55.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.18.155:22-147.75.109.163:36986 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:57.621579 systemd[1]: run-containerd-runc-k8s.io-c6f4c3569b2a573f9be026a6ae8f86427ea67b4b025efac18e6a297cbc941376-runc.vUVCTe.mount: Deactivated successfully.
Feb 9 19:17:57.816258 systemd[1]: run-containerd-runc-k8s.io-6f79caf321c8874bafdc8568d95e32e279ac0301f1bbc7f8f5c0fad242cb7994-runc.7d8GIR.mount: Deactivated successfully.
Feb 9 19:18:01.011109 systemd[1]: Started sshd@25-172.31.18.155:22-147.75.109.163:37002.service.
Feb 9 19:18:01.015087 kernel: kauditd_printk_skb: 1 callbacks suppressed
Feb 9 19:18:01.015238 kernel: audit: type=1130 audit(1707506281.010:491): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.18.155:22-147.75.109.163:37002 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:18:01.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.18.155:22-147.75.109.163:37002 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:18:01.216000 audit[6077]: USER_ACCT pid=6077 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:18:01.219015 sshd[6077]: Accepted publickey for core from 147.75.109.163 port 37002 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78
Feb 9 19:18:01.223008 sshd[6077]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:18:01.216000 audit[6077]: CRED_ACQ pid=6077 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:18:01.238234 kernel: audit: type=1101 audit(1707506281.216:492): pid=6077 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:18:01.238504 kernel: audit: type=1103 audit(1707506281.216:493): pid=6077 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:18:01.245946 kernel: audit: type=1006 audit(1707506281.216:494): pid=6077 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1
Feb 9 19:18:01.246102 kernel: audit: type=1300 audit(1707506281.216:494): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc8cb8020 a2=3 a3=1 items=0 ppid=1 pid=6077 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:18:01.216000 audit[6077]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc8cb8020 a2=3 a3=1 items=0 ppid=1 pid=6077 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:18:01.244739 systemd[1]: Started session-26.scope.
Feb 9 19:18:01.249069 systemd-logind[1733]: New session 26 of user core.
Feb 9 19:18:01.216000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 9 19:18:01.268248 kernel: audit: type=1327 audit(1707506281.216:494): proctitle=737368643A20636F7265205B707269765D
Feb 9 19:18:01.261000 audit[6077]: USER_START pid=6077 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:18:01.280957 kernel: audit: type=1105 audit(1707506281.261:495): pid=6077 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:18:01.268000 audit[6080]: CRED_ACQ pid=6080 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:18:01.282229 kernel: audit: type=1103 audit(1707506281.268:496): pid=6080 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:18:01.508162 sshd[6077]: pam_unix(sshd:session): session closed for user core
Feb 9 19:18:01.509000 audit[6077]: USER_END pid=6077 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:18:01.509000 audit[6077]: CRED_DISP pid=6077 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:18:01.523162 systemd[1]: sshd@25-172.31.18.155:22-147.75.109.163:37002.service: Deactivated successfully.
Feb 9 19:18:01.532958 kernel: audit: type=1106 audit(1707506281.509:497): pid=6077 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:18:01.533102 kernel: audit: type=1104 audit(1707506281.509:498): pid=6077 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:18:01.525305 systemd[1]: session-26.scope: Deactivated successfully.
Feb 9 19:18:01.532577 systemd-logind[1733]: Session 26 logged out. Waiting for processes to exit.
Feb 9 19:18:01.522000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.18.155:22-147.75.109.163:37002 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:18:01.534999 systemd-logind[1733]: Removed session 26.
Feb 9 19:18:06.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.18.155:22-147.75.109.163:43616 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:18:06.534184 systemd[1]: Started sshd@26-172.31.18.155:22-147.75.109.163:43616.service.
Feb 9 19:18:06.537545 kernel: kauditd_printk_skb: 1 callbacks suppressed
Feb 9 19:18:06.537603 kernel: audit: type=1130 audit(1707506286.533:500): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.18.155:22-147.75.109.163:43616 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:18:06.708000 audit[6092]: USER_ACCT pid=6092 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:18:06.709528 sshd[6092]: Accepted publickey for core from 147.75.109.163 port 43616 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78
Feb 9 19:18:06.720814 kernel: audit: type=1101 audit(1707506286.708:501): pid=6092 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:18:06.720000 audit[6092]: CRED_ACQ pid=6092 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:18:06.722389 sshd[6092]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:18:06.736792 kernel: audit: type=1103 audit(1707506286.720:502): pid=6092 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:18:06.736944 kernel: audit: type=1006 audit(1707506286.720:503): pid=6092 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1
Feb 9 19:18:06.720000 audit[6092]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdbef81f0 a2=3 a3=1 items=0 ppid=1 pid=6092 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:18:06.747407 kernel: audit: type=1300 audit(1707506286.720:503): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdbef81f0 a2=3 a3=1 items=0 ppid=1 pid=6092 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:18:06.720000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 9 19:18:06.751440 kernel: audit: type=1327 audit(1707506286.720:503): proctitle=737368643A20636F7265205B707269765D
Feb 9 19:18:06.757865 systemd-logind[1733]: New session 27 of user core.
Feb 9 19:18:06.759410 systemd[1]: Started session-27.scope.
Feb 9 19:18:06.774000 audit[6092]: USER_START pid=6092 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:18:06.788000 audit[6095]: CRED_ACQ pid=6095 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:18:06.798665 kernel: audit: type=1105 audit(1707506286.774:504): pid=6092 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:18:06.798928 kernel: audit: type=1103 audit(1707506286.788:505): pid=6095 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:18:07.148145 sshd[6092]: pam_unix(sshd:session): session closed for user core
Feb 9 19:18:07.149000 audit[6092]: USER_END pid=6092 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:18:07.153288 systemd-logind[1733]: Session 27 logged out. Waiting for processes to exit.
Feb 9 19:18:07.156121 systemd[1]: sshd@26-172.31.18.155:22-147.75.109.163:43616.service: Deactivated successfully.
Feb 9 19:18:07.157701 systemd[1]: session-27.scope: Deactivated successfully.
Feb 9 19:18:07.161746 systemd-logind[1733]: Removed session 27.
Feb 9 19:18:07.149000 audit[6092]: CRED_DISP pid=6092 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:18:07.173426 kernel: audit: type=1106 audit(1707506287.149:506): pid=6092 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:18:07.173593 kernel: audit: type=1104 audit(1707506287.149:507): pid=6092 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:18:07.155000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.18.155:22-147.75.109.163:43616 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:18:12.178367 systemd[1]: Started sshd@27-172.31.18.155:22-147.75.109.163:43624.service.
Feb 9 19:18:12.193916 kernel: kauditd_printk_skb: 1 callbacks suppressed
Feb 9 19:18:12.194116 kernel: audit: type=1130 audit(1707506292.180:509): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-172.31.18.155:22-147.75.109.163:43624 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:18:12.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-172.31.18.155:22-147.75.109.163:43624 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:18:12.382000 audit[6116]: USER_ACCT pid=6116 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:18:12.387838 sshd[6116]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:18:12.392356 sshd[6116]: Accepted publickey for core from 147.75.109.163 port 43624 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78
Feb 9 19:18:12.384000 audit[6116]: CRED_ACQ pid=6116 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:18:12.404876 kernel: audit: type=1101 audit(1707506292.382:510): pid=6116 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:18:12.404996 kernel: audit: type=1103 audit(1707506292.384:511): pid=6116 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:18:12.414280 kernel: audit: type=1006 audit(1707506292.385:512): pid=6116 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1
Feb 9 19:18:12.414407 kernel: audit: type=1300 audit(1707506292.385:512): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffeec2f870 a2=3 a3=1 items=0 ppid=1 pid=6116 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:18:12.385000 audit[6116]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffeec2f870 a2=3 a3=1 items=0 ppid=1 pid=6116 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:18:12.410457 systemd[1]: Started session-28.scope.
Feb 9 19:18:12.413439 systemd-logind[1733]: New session 28 of user core.
Feb 9 19:18:12.434148 kernel: audit: type=1327 audit(1707506292.385:512): proctitle=737368643A20636F7265205B707269765D
Feb 9 19:18:12.385000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 9 19:18:12.422000 audit[6116]: USER_START pid=6116 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:18:12.445932 kernel: audit: type=1105 audit(1707506292.422:513): pid=6116 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:18:12.425000 audit[6120]: CRED_ACQ pid=6120 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:18:12.456824 kernel: audit: type=1103 audit(1707506292.425:514): pid=6120 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:18:12.727501 sshd[6116]: pam_unix(sshd:session): session closed for user core
Feb 9 19:18:12.728000 audit[6116]: USER_END pid=6116 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:18:12.744455 systemd-logind[1733]: Session 28 logged out. Waiting for processes to exit.
Feb 9 19:18:12.745055 systemd[1]: sshd@27-172.31.18.155:22-147.75.109.163:43624.service: Deactivated successfully.
Feb 9 19:18:12.740000 audit[6116]: CRED_DISP pid=6116 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:18:12.746796 kernel: audit: type=1106 audit(1707506292.728:515): pid=6116 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 19:18:12.747121 systemd[1]: session-28.scope: Deactivated successfully.
Feb 9 19:18:12.757561 systemd-logind[1733]: Removed session 28.
Feb 9 19:18:12.743000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-172.31.18.155:22-147.75.109.163:43624 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:18:12.760818 kernel: audit: type=1104 audit(1707506292.740:516): pid=6116 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'