Dec 13 14:13:47.978057 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Dec 13 14:13:47.978093 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Dec 13 12:58:58 -00 2024
Dec 13 14:13:47.978115 kernel: efi: EFI v2.70 by EDK II
Dec 13 14:13:47.978130 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7171cf98
Dec 13 14:13:47.978143 kernel: ACPI: Early table checksum verification disabled
Dec 13 14:13:47.978157 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Dec 13 14:13:47.978172 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Dec 13 14:13:47.978187 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Dec 13 14:13:47.978200 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Dec 13 14:13:47.978214 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Dec 13 14:13:47.978232 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Dec 13 14:13:47.978246 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Dec 13 14:13:47.978259 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Dec 13 14:13:47.978273 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Dec 13 14:13:47.978290 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Dec 13 14:13:47.978328 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Dec 13 14:13:47.978344 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Dec 13 14:13:47.978359 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Dec 13 14:13:47.978374 kernel: printk: bootconsole [uart0] enabled
Dec 13 14:13:47.978388 kernel: NUMA: Failed to initialise from firmware
Dec 13 14:13:47.978403 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Dec 13 14:13:47.978418 kernel: NUMA: NODE_DATA [mem 0x4b5843900-0x4b5848fff]
Dec 13 14:13:47.978432 kernel: Zone ranges:
Dec 13 14:13:47.978447 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Dec 13 14:13:47.978461 kernel: DMA32 empty
Dec 13 14:13:47.978475 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Dec 13 14:13:47.978494 kernel: Movable zone start for each node
Dec 13 14:13:47.978509 kernel: Early memory node ranges
Dec 13 14:13:47.978524 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Dec 13 14:13:47.978538 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Dec 13 14:13:47.978565 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Dec 13 14:13:47.978581 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Dec 13 14:13:47.978597 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Dec 13 14:13:47.978611 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Dec 13 14:13:47.981287 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Dec 13 14:13:47.981306 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Dec 13 14:13:47.981322 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Dec 13 14:13:47.981337 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Dec 13 14:13:47.981361 kernel: psci: probing for conduit method from ACPI.
Dec 13 14:13:47.981376 kernel: psci: PSCIv1.0 detected in firmware.
Dec 13 14:13:47.981398 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 13 14:13:47.981414 kernel: psci: Trusted OS migration not required
Dec 13 14:13:47.981429 kernel: psci: SMC Calling Convention v1.1
Dec 13 14:13:47.981449 kernel: ACPI: SRAT not present
Dec 13 14:13:47.981465 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
Dec 13 14:13:47.981480 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
Dec 13 14:13:47.981496 kernel: pcpu-alloc: [0] 0 [0] 1
Dec 13 14:13:47.981511 kernel: Detected PIPT I-cache on CPU0
Dec 13 14:13:47.981527 kernel: CPU features: detected: GIC system register CPU interface
Dec 13 14:13:47.981542 kernel: CPU features: detected: Spectre-v2
Dec 13 14:13:47.981557 kernel: CPU features: detected: Spectre-v3a
Dec 13 14:13:47.981572 kernel: CPU features: detected: Spectre-BHB
Dec 13 14:13:47.981588 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 13 14:13:47.981603 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 13 14:13:47.981662 kernel: CPU features: detected: ARM erratum 1742098
Dec 13 14:13:47.981680 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Dec 13 14:13:47.981696 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Dec 13 14:13:47.981711 kernel: Policy zone: Normal
Dec 13 14:13:47.981729 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5997a8cf94b1df1856dc785f0a7074604bbf4c21fdcca24a1996021471a77601
Dec 13 14:13:47.981746 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 14:13:47.981761 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 14:13:47.981777 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 14:13:47.981792 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 14:13:47.981808 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Dec 13 14:13:47.981829 kernel: Memory: 3824524K/4030464K available (9792K kernel code, 2092K rwdata, 7576K rodata, 36416K init, 777K bss, 205940K reserved, 0K cma-reserved)
Dec 13 14:13:47.981845 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 14:13:47.981860 kernel: trace event string verifier disabled
Dec 13 14:13:47.981875 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 14:13:47.981891 kernel: rcu: RCU event tracing is enabled.
Dec 13 14:13:47.981906 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 14:13:47.981922 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 14:13:47.981938 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 14:13:47.981953 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 14:13:47.981968 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 14:13:47.981983 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 13 14:13:47.981998 kernel: GICv3: 96 SPIs implemented
Dec 13 14:13:47.982017 kernel: GICv3: 0 Extended SPIs implemented
Dec 13 14:13:47.982032 kernel: GICv3: Distributor has no Range Selector support
Dec 13 14:13:47.982048 kernel: Root IRQ handler: gic_handle_irq
Dec 13 14:13:47.982063 kernel: GICv3: 16 PPIs implemented
Dec 13 14:13:47.982078 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Dec 13 14:13:47.982093 kernel: ACPI: SRAT not present
Dec 13 14:13:47.982107 kernel: ITS [mem 0x10080000-0x1009ffff]
Dec 13 14:13:47.982123 kernel: ITS@0x0000000010080000: allocated 8192 Devices @400090000 (indirect, esz 8, psz 64K, shr 1)
Dec 13 14:13:47.982138 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000a0000 (flat, esz 8, psz 64K, shr 1)
Dec 13 14:13:47.982153 kernel: GICv3: using LPI property table @0x00000004000b0000
Dec 13 14:13:47.982169 kernel: ITS: Using hypervisor restricted LPI range [128]
Dec 13 14:13:47.982188 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000d0000
Dec 13 14:13:47.982203 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Dec 13 14:13:47.982219 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Dec 13 14:13:47.982234 kernel: sched_clock: 56 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Dec 13 14:13:47.982249 kernel: Console: colour dummy device 80x25
Dec 13 14:13:47.982265 kernel: printk: console [tty1] enabled
Dec 13 14:13:47.982281 kernel: ACPI: Core revision 20210730
Dec 13 14:13:47.982296 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Dec 13 14:13:47.982331 kernel: pid_max: default: 32768 minimum: 301
Dec 13 14:13:47.982348 kernel: LSM: Security Framework initializing
Dec 13 14:13:47.982368 kernel: SELinux: Initializing.
Dec 13 14:13:47.982384 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 14:13:47.982400 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 14:13:47.982415 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 14:13:47.982431 kernel: Platform MSI: ITS@0x10080000 domain created
Dec 13 14:13:47.982446 kernel: PCI/MSI: ITS@0x10080000 domain created
Dec 13 14:13:47.982461 kernel: Remapping and enabling EFI services.
Dec 13 14:13:47.982477 kernel: smp: Bringing up secondary CPUs ...
Dec 13 14:13:47.982492 kernel: Detected PIPT I-cache on CPU1
Dec 13 14:13:47.982512 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Dec 13 14:13:47.982528 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000e0000
Dec 13 14:13:47.982544 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Dec 13 14:13:47.982559 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 14:13:47.982574 kernel: SMP: Total of 2 processors activated.
Dec 13 14:13:47.982590 kernel: CPU features: detected: 32-bit EL0 Support
Dec 13 14:13:47.982605 kernel: CPU features: detected: 32-bit EL1 Support
Dec 13 14:13:47.984671 kernel: CPU features: detected: CRC32 instructions
Dec 13 14:13:47.984695 kernel: CPU: All CPU(s) started at EL1
Dec 13 14:13:47.984712 kernel: alternatives: patching kernel code
Dec 13 14:13:47.984735 kernel: devtmpfs: initialized
Dec 13 14:13:47.984751 kernel: KASLR disabled due to lack of seed
Dec 13 14:13:47.984778 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 14:13:47.984799 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 14:13:47.984816 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 14:13:47.984832 kernel: SMBIOS 3.0.0 present.
Dec 13 14:13:47.984848 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Dec 13 14:13:47.984864 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 14:13:47.984881 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 13 14:13:47.984897 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 13 14:13:47.984915 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 13 14:13:47.984941 kernel: audit: initializing netlink subsys (disabled)
Dec 13 14:13:47.984959 kernel: audit: type=2000 audit(0.252:1): state=initialized audit_enabled=0 res=1
Dec 13 14:13:47.984975 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 14:13:47.984991 kernel: cpuidle: using governor menu
Dec 13 14:13:47.985008 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 13 14:13:47.985028 kernel: ASID allocator initialised with 32768 entries
Dec 13 14:13:47.985045 kernel: ACPI: bus type PCI registered
Dec 13 14:13:47.985061 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 14:13:47.985077 kernel: Serial: AMBA PL011 UART driver
Dec 13 14:13:47.985093 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 14:13:47.985109 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Dec 13 14:13:47.985126 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 14:13:47.985142 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Dec 13 14:13:47.985158 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 14:13:47.985179 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 13 14:13:47.985196 kernel: ACPI: Added _OSI(Module Device)
Dec 13 14:13:47.985212 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 14:13:47.985228 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 14:13:47.985244 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 14:13:47.985260 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 14:13:47.985276 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 14:13:47.985292 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 14:13:47.985308 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 14:13:47.985327 kernel: ACPI: Interpreter enabled
Dec 13 14:13:47.985344 kernel: ACPI: Using GIC for interrupt routing
Dec 13 14:13:47.985359 kernel: ACPI: MCFG table detected, 1 entries
Dec 13 14:13:47.985376 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Dec 13 14:13:47.985740 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 14:13:47.987536 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 13 14:13:47.987767 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 13 14:13:47.987956 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Dec 13 14:13:47.988150 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Dec 13 14:13:47.988173 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Dec 13 14:13:47.988190 kernel: acpiphp: Slot [1] registered
Dec 13 14:13:47.988207 kernel: acpiphp: Slot [2] registered
Dec 13 14:13:47.988223 kernel: acpiphp: Slot [3] registered
Dec 13 14:13:47.988239 kernel: acpiphp: Slot [4] registered
Dec 13 14:13:47.988255 kernel: acpiphp: Slot [5] registered
Dec 13 14:13:47.988271 kernel: acpiphp: Slot [6] registered
Dec 13 14:13:47.988287 kernel: acpiphp: Slot [7] registered
Dec 13 14:13:47.988307 kernel: acpiphp: Slot [8] registered
Dec 13 14:13:47.988323 kernel: acpiphp: Slot [9] registered
Dec 13 14:13:47.988339 kernel: acpiphp: Slot [10] registered
Dec 13 14:13:47.988355 kernel: acpiphp: Slot [11] registered
Dec 13 14:13:47.988371 kernel: acpiphp: Slot [12] registered
Dec 13 14:13:47.988387 kernel: acpiphp: Slot [13] registered
Dec 13 14:13:47.988403 kernel: acpiphp: Slot [14] registered
Dec 13 14:13:47.988419 kernel: acpiphp: Slot [15] registered
Dec 13 14:13:47.988435 kernel: acpiphp: Slot [16] registered
Dec 13 14:13:47.988455 kernel: acpiphp: Slot [17] registered
Dec 13 14:13:47.988471 kernel: acpiphp: Slot [18] registered
Dec 13 14:13:47.988487 kernel: acpiphp: Slot [19] registered
Dec 13 14:13:47.988503 kernel: acpiphp: Slot [20] registered
Dec 13 14:13:47.988519 kernel: acpiphp: Slot [21] registered
Dec 13 14:13:47.988535 kernel: acpiphp: Slot [22] registered
Dec 13 14:13:47.988551 kernel: acpiphp: Slot [23] registered
Dec 13 14:13:47.988567 kernel: acpiphp: Slot [24] registered
Dec 13 14:13:47.988584 kernel: acpiphp: Slot [25] registered
Dec 13 14:13:47.988599 kernel: acpiphp: Slot [26] registered
Dec 13 14:13:47.988636 kernel: acpiphp: Slot [27] registered
Dec 13 14:13:47.988656 kernel: acpiphp: Slot [28] registered
Dec 13 14:13:47.988673 kernel: acpiphp: Slot [29] registered
Dec 13 14:13:47.988689 kernel: acpiphp: Slot [30] registered
Dec 13 14:13:47.988705 kernel: acpiphp: Slot [31] registered
Dec 13 14:13:47.988721 kernel: PCI host bridge to bus 0000:00
Dec 13 14:13:47.988914 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Dec 13 14:13:47.989086 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 13 14:13:47.989260 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Dec 13 14:13:47.989428 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Dec 13 14:13:47.989656 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Dec 13 14:13:47.989884 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Dec 13 14:13:47.990082 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Dec 13 14:13:47.990290 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Dec 13 14:13:47.990513 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Dec 13 14:13:47.990727 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Dec 13 14:13:47.990997 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Dec 13 14:13:47.991201 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Dec 13 14:13:47.991427 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Dec 13 14:13:47.991636 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Dec 13 14:13:47.991854 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Dec 13 14:13:47.992073 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Dec 13 14:13:47.992270 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Dec 13 14:13:47.992457 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Dec 13 14:13:47.992667 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Dec 13 14:13:47.992868 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Dec 13 14:13:47.993045 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Dec 13 14:13:47.993218 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 13 14:13:47.993396 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Dec 13 14:13:47.993419 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 13 14:13:47.993436 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 13 14:13:47.993452 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 13 14:13:47.993468 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 13 14:13:47.993485 kernel: iommu: Default domain type: Translated
Dec 13 14:13:47.993501 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 13 14:13:47.993517 kernel: vgaarb: loaded
Dec 13 14:13:47.993534 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 14:13:47.993555 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 13 14:13:47.993571 kernel: PTP clock support registered
Dec 13 14:13:47.993587 kernel: Registered efivars operations
Dec 13 14:13:47.993603 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 13 14:13:48.012688 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 14:13:48.012729 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 14:13:48.012748 kernel: pnp: PnP ACPI init
Dec 13 14:13:48.013010 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Dec 13 14:13:48.013048 kernel: pnp: PnP ACPI: found 1 devices
Dec 13 14:13:48.013066 kernel: NET: Registered PF_INET protocol family
Dec 13 14:13:48.013083 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 14:13:48.013100 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 14:13:48.013116 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 14:13:48.013133 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 14:13:48.013152 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Dec 13 14:13:48.013172 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 14:13:48.013188 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 14:13:48.013209 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 14:13:48.013226 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 14:13:48.013243 kernel: PCI: CLS 0 bytes, default 64
Dec 13 14:13:48.013259 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Dec 13 14:13:48.013276 kernel: kvm [1]: HYP mode not available
Dec 13 14:13:48.013292 kernel: Initialise system trusted keyrings
Dec 13 14:13:48.013309 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 14:13:48.013325 kernel: Key type asymmetric registered
Dec 13 14:13:48.013341 kernel: Asymmetric key parser 'x509' registered
Dec 13 14:13:48.013362 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 14:13:48.013379 kernel: io scheduler mq-deadline registered
Dec 13 14:13:48.013395 kernel: io scheduler kyber registered
Dec 13 14:13:48.013412 kernel: io scheduler bfq registered
Dec 13 14:13:48.023653 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Dec 13 14:13:48.023702 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Dec 13 14:13:48.023721 kernel: ACPI: button: Power Button [PWRB]
Dec 13 14:13:48.023738 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Dec 13 14:13:48.023763 kernel: ACPI: button: Sleep Button [SLPB]
Dec 13 14:13:48.023780 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 14:13:48.023798 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Dec 13 14:13:48.024028 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Dec 13 14:13:48.024053 kernel: printk: console [ttyS0] disabled
Dec 13 14:13:48.024070 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Dec 13 14:13:48.024087 kernel: printk: console [ttyS0] enabled
Dec 13 14:13:48.024103 kernel: printk: bootconsole [uart0] disabled
Dec 13 14:13:48.024119 kernel: thunder_xcv, ver 1.0
Dec 13 14:13:48.024140 kernel: thunder_bgx, ver 1.0
Dec 13 14:13:48.024156 kernel: nicpf, ver 1.0
Dec 13 14:13:48.024172 kernel: nicvf, ver 1.0
Dec 13 14:13:48.024380 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 13 14:13:48.024561 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T14:13:47 UTC (1734099227)
Dec 13 14:13:48.024585 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 14:13:48.024602 kernel: NET: Registered PF_INET6 protocol family
Dec 13 14:13:48.024644 kernel: Segment Routing with IPv6
Dec 13 14:13:48.024666 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 14:13:48.024689 kernel: NET: Registered PF_PACKET protocol family
Dec 13 14:13:48.024706 kernel: Key type dns_resolver registered
Dec 13 14:13:48.024738 kernel: registered taskstats version 1
Dec 13 14:13:48.024756 kernel: Loading compiled-in X.509 certificates
Dec 13 14:13:48.024772 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: e011ba9949ade5a6d03f7a5e28171f7f59e70f8a'
Dec 13 14:13:48.024789 kernel: Key type .fscrypt registered
Dec 13 14:13:48.024805 kernel: Key type fscrypt-provisioning registered
Dec 13 14:13:48.024822 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 14:13:48.024838 kernel: ima: Allocated hash algorithm: sha1
Dec 13 14:13:48.024859 kernel: ima: No architecture policies found
Dec 13 14:13:48.024875 kernel: clk: Disabling unused clocks
Dec 13 14:13:48.024891 kernel: Freeing unused kernel memory: 36416K
Dec 13 14:13:48.024907 kernel: Run /init as init process
Dec 13 14:13:48.024923 kernel: with arguments:
Dec 13 14:13:48.024939 kernel: /init
Dec 13 14:13:48.024954 kernel: with environment:
Dec 13 14:13:48.024970 kernel: HOME=/
Dec 13 14:13:48.024985 kernel: TERM=linux
Dec 13 14:13:48.025005 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 14:13:48.025028 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 14:13:48.025049 systemd[1]: Detected virtualization amazon.
Dec 13 14:13:48.025067 systemd[1]: Detected architecture arm64.
Dec 13 14:13:48.025084 systemd[1]: Running in initrd.
Dec 13 14:13:48.025101 systemd[1]: No hostname configured, using default hostname.
Dec 13 14:13:48.025118 systemd[1]: Hostname set to .
Dec 13 14:13:48.025140 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 14:13:48.025158 systemd[1]: Queued start job for default target initrd.target.
Dec 13 14:13:48.025175 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 14:13:48.025192 systemd[1]: Reached target cryptsetup.target.
Dec 13 14:13:48.025209 systemd[1]: Reached target paths.target.
Dec 13 14:13:48.025226 systemd[1]: Reached target slices.target.
Dec 13 14:13:48.025243 systemd[1]: Reached target swap.target.
Dec 13 14:13:48.025261 systemd[1]: Reached target timers.target.
Dec 13 14:13:48.025282 systemd[1]: Listening on iscsid.socket.
Dec 13 14:13:48.025300 systemd[1]: Listening on iscsiuio.socket.
Dec 13 14:13:48.025318 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 14:13:48.025335 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 14:13:48.025353 systemd[1]: Listening on systemd-journald.socket.
Dec 13 14:13:48.025370 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 14:13:48.025388 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 14:13:48.025406 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 14:13:48.025427 systemd[1]: Reached target sockets.target.
Dec 13 14:13:48.025445 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 14:13:48.025463 systemd[1]: Finished network-cleanup.service.
Dec 13 14:13:48.025480 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 14:13:48.025497 systemd[1]: Starting systemd-journald.service...
Dec 13 14:13:48.025514 systemd[1]: Starting systemd-modules-load.service...
Dec 13 14:13:48.025532 systemd[1]: Starting systemd-resolved.service...
Dec 13 14:13:48.025549 systemd[1]: Starting systemd-vconsole-setup.service...
Dec 13 14:13:48.025567 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 14:13:48.025587 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 14:13:48.025606 kernel: audit: type=1130 audit(1734099227.972:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:48.025645 systemd[1]: Finished systemd-vconsole-setup.service.
Dec 13 14:13:48.025665 kernel: audit: type=1130 audit(1734099227.985:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:48.025683 systemd[1]: Starting dracut-cmdline-ask.service...
Dec 13 14:13:48.025700 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 14:13:48.025721 systemd-journald[309]: Journal started
Dec 13 14:13:48.025815 systemd-journald[309]: Runtime Journal (/run/log/journal/ec28010aef86db6f0cbbeafc796a3d32) is 8.0M, max 75.4M, 67.4M free.
Dec 13 14:13:47.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:47.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:47.973216 systemd-modules-load[310]: Inserted module 'overlay'
Dec 13 14:13:48.048771 systemd[1]: Started systemd-journald.service.
Dec 13 14:13:48.048819 kernel: audit: type=1130 audit(1734099228.029:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:48.048845 kernel: audit: type=1130 audit(1734099228.030:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:48.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:48.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:48.031543 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 14:13:48.064734 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 14:13:48.069644 kernel: Bridge firewalling registered
Dec 13 14:13:48.069322 systemd-modules-load[310]: Inserted module 'br_netfilter'
Dec 13 14:13:48.081809 systemd-resolved[311]: Positive Trust Anchors:
Dec 13 14:13:48.081836 systemd-resolved[311]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 14:13:48.081892 systemd-resolved[311]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 14:13:48.131046 systemd[1]: Finished dracut-cmdline-ask.service.
Dec 13 14:13:48.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:48.142538 systemd[1]: Starting dracut-cmdline.service...
Dec 13 14:13:48.145683 kernel: audit: type=1130 audit(1734099228.131:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:48.153814 kernel: SCSI subsystem initialized
Dec 13 14:13:48.170206 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 14:13:48.170284 kernel: device-mapper: uevent: version 1.0.3
Dec 13 14:13:48.177650 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Dec 13 14:13:48.183260 dracut-cmdline[327]: dracut-dracut-053
Dec 13 14:13:48.188022 dracut-cmdline[327]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5997a8cf94b1df1856dc785f0a7074604bbf4c21fdcca24a1996021471a77601
Dec 13 14:13:48.187732 systemd-modules-load[310]: Inserted module 'dm_multipath'
Dec 13 14:13:48.191102 systemd[1]: Finished systemd-modules-load.service.
Dec 13 14:13:48.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:48.216411 systemd[1]: Starting systemd-sysctl.service...
Dec 13 14:13:48.223680 kernel: audit: type=1130 audit(1734099228.206:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:48.238363 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:13:48.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:48.249662 kernel: audit: type=1130 audit(1734099228.238:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:48.341662 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 14:13:48.362663 kernel: iscsi: registered transport (tcp)
Dec 13 14:13:48.389915 kernel: iscsi: registered transport (qla4xxx)
Dec 13 14:13:48.389998 kernel: QLogic iSCSI HBA Driver
Dec 13 14:13:48.530388 systemd-resolved[311]: Defaulting to hostname 'linux'.
Dec 13 14:13:48.532644 kernel: random: crng init done
Dec 13 14:13:48.534938 systemd[1]: Started systemd-resolved.service.
Dec 13 14:13:48.547004 kernel: audit: type=1130 audit(1734099228.535:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:48.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:48.536881 systemd[1]: Reached target nss-lookup.target.
Dec 13 14:13:48.569113 systemd[1]: Finished dracut-cmdline.service.
Dec 13 14:13:48.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:48.579156 systemd[1]: Starting dracut-pre-udev.service...
Dec 13 14:13:48.581525 kernel: audit: type=1130 audit(1734099228.567:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:48.645668 kernel: raid6: neonx8 gen() 6416 MB/s
Dec 13 14:13:48.663657 kernel: raid6: neonx8 xor() 4526 MB/s
Dec 13 14:13:48.681666 kernel: raid6: neonx4 gen() 6549 MB/s
Dec 13 14:13:48.699660 kernel: raid6: neonx4 xor() 4720 MB/s
Dec 13 14:13:48.717654 kernel: raid6: neonx2 gen() 5813 MB/s
Dec 13 14:13:48.735661 kernel: raid6: neonx2 xor() 4375 MB/s
Dec 13 14:13:48.753661 kernel: raid6: neonx1 gen() 4492 MB/s
Dec 13 14:13:48.771654 kernel: raid6: neonx1 xor() 3569 MB/s
Dec 13 14:13:48.789653 kernel: raid6: int64x8 gen() 3435 MB/s
Dec 13 14:13:48.807656 kernel: raid6: int64x8 xor() 2042 MB/s
Dec 13 14:13:48.825652 kernel: raid6: int64x4 gen() 3843 MB/s
Dec 13 14:13:48.843657 kernel: raid6: int64x4 xor() 2162 MB/s
Dec 13 14:13:48.861654 kernel: raid6: int64x2 gen() 3603 MB/s
Dec 13 14:13:48.879653 kernel: raid6: int64x2 xor() 1921 MB/s
Dec 13 14:13:48.897658 kernel: raid6: int64x1 gen() 2758 MB/s
Dec 13 14:13:48.916856 kernel: raid6: int64x1 xor() 1432 MB/s
Dec 13 14:13:48.916897 kernel: raid6: using algorithm neonx4 gen() 6549 MB/s
Dec 13 14:13:48.916920 kernel: raid6: .... xor() 4720 MB/s, rmw enabled
Dec 13 14:13:48.918509 kernel: raid6: using neon recovery algorithm
Dec 13 14:13:48.936659 kernel: xor: measuring software checksum speed
Dec 13 14:13:48.936723 kernel: 8regs : 8775 MB/sec
Dec 13 14:13:48.939902 kernel: 32regs : 10710 MB/sec
Dec 13 14:13:48.941721 kernel: arm64_neon : 9388 MB/sec
Dec 13 14:13:48.941755 kernel: xor: using function: 32regs (10710 MB/sec)
Dec 13 14:13:49.034666 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Dec 13 14:13:49.051676 systemd[1]: Finished dracut-pre-udev.service.
Dec 13 14:13:49.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:49.053000 audit: BPF prog-id=7 op=LOAD
Dec 13 14:13:49.053000 audit: BPF prog-id=8 op=LOAD
Dec 13 14:13:49.055798 systemd[1]: Starting systemd-udevd.service...
Dec 13 14:13:49.086181 systemd-udevd[510]: Using default interface naming scheme 'v252'.
Dec 13 14:13:49.097719 systemd[1]: Started systemd-udevd.service.
Dec 13 14:13:49.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:49.102246 systemd[1]: Starting dracut-pre-trigger.service...
Dec 13 14:13:49.133157 dracut-pre-trigger[517]: rd.md=0: removing MD RAID activation
Dec 13 14:13:49.191018 systemd[1]: Finished dracut-pre-trigger.service.
Dec 13 14:13:49.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:49.197479 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 14:13:49.297461 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 14:13:49.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:49.429743 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Dec 13 14:13:49.429822 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Dec 13 14:13:49.446352 kernel: ena 0000:00:05.0: ENA device version: 0.10
Dec 13 14:13:49.446596 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Dec 13 14:13:49.446848 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:2b:c7:53:ad:bb
Dec 13 14:13:49.447056 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Dec 13 14:13:49.450151 (udev-worker)[560]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 14:13:49.452813 kernel: nvme nvme0: pci function 0000:00:04.0
Dec 13 14:13:49.461660 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Dec 13 14:13:49.469916 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 14:13:49.469983 kernel: GPT:9289727 != 16777215
Dec 13 14:13:49.472046 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 14:13:49.473279 kernel: GPT:9289727 != 16777215
Dec 13 14:13:49.475060 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 14:13:49.476536 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 14:13:49.562659 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (562)
Dec 13 14:13:49.590881 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Dec 13 14:13:49.643598 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 14:13:49.662445 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Dec 13 14:13:49.707069 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Dec 13 14:13:49.709701 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Dec 13 14:13:49.715469 systemd[1]: Starting disk-uuid.service...
Dec 13 14:13:49.731653 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 14:13:49.733748 disk-uuid[666]: Primary Header is updated.
Dec 13 14:13:49.733748 disk-uuid[666]: Secondary Entries is updated.
Dec 13 14:13:49.733748 disk-uuid[666]: Secondary Header is updated.
Dec 13 14:13:49.767664 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 14:13:50.756597 disk-uuid[667]: The operation has completed successfully.
Dec 13 14:13:50.760720 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 14:13:50.922001 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 14:13:50.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:50.925000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:50.922202 systemd[1]: Finished disk-uuid.service.
Dec 13 14:13:50.947723 systemd[1]: Starting verity-setup.service...
Dec 13 14:13:50.972662 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Dec 13 14:13:51.064429 systemd[1]: Found device dev-mapper-usr.device.
Dec 13 14:13:51.068194 systemd[1]: Finished verity-setup.service.
Dec 13 14:13:51.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:51.072514 systemd[1]: Mounting sysusr-usr.mount...
Dec 13 14:13:51.158660 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Dec 13 14:13:51.159416 systemd[1]: Mounted sysusr-usr.mount.
Dec 13 14:13:51.159775 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Dec 13 14:13:51.161027 systemd[1]: Starting ignition-setup.service...
Dec 13 14:13:51.172519 systemd[1]: Starting parse-ip-for-networkd.service...
Dec 13 14:13:51.193740 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 14:13:51.193806 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 14:13:51.193830 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Dec 13 14:13:51.207671 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 14:13:51.224283 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 14:13:51.258982 systemd[1]: Finished ignition-setup.service.
Dec 13 14:13:51.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:51.263201 systemd[1]: Starting ignition-fetch-offline.service...
Dec 13 14:13:51.315320 systemd[1]: Finished parse-ip-for-networkd.service.
Dec 13 14:13:51.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:51.317000 audit: BPF prog-id=9 op=LOAD
Dec 13 14:13:51.320086 systemd[1]: Starting systemd-networkd.service...
Dec 13 14:13:51.365414 systemd-networkd[1095]: lo: Link UP
Dec 13 14:13:51.365437 systemd-networkd[1095]: lo: Gained carrier
Dec 13 14:13:51.368968 systemd-networkd[1095]: Enumeration completed
Dec 13 14:13:51.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:51.369428 systemd-networkd[1095]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 14:13:51.369659 systemd[1]: Started systemd-networkd.service.
Dec 13 14:13:51.371413 systemd[1]: Reached target network.target.
Dec 13 14:13:51.376819 systemd[1]: Starting iscsiuio.service...
Dec 13 14:13:51.387483 systemd-networkd[1095]: eth0: Link UP
Dec 13 14:13:51.387687 systemd-networkd[1095]: eth0: Gained carrier
Dec 13 14:13:51.389916 systemd[1]: Started iscsiuio.service.
Dec 13 14:13:51.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:51.395223 systemd[1]: Starting iscsid.service...
Dec 13 14:13:51.403448 iscsid[1100]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 14:13:51.403448 iscsid[1100]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Dec 13 14:13:51.403448 iscsid[1100]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Dec 13 14:13:51.403448 iscsid[1100]: If using hardware iscsi like qla4xxx this message can be ignored.
Dec 13 14:13:51.403448 iscsid[1100]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 14:13:51.421896 iscsid[1100]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Dec 13 14:13:51.421972 systemd[1]: Started iscsid.service.
Dec 13 14:13:51.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:51.428079 systemd[1]: Starting dracut-initqueue.service...
Dec 13 14:13:51.429891 systemd-networkd[1095]: eth0: DHCPv4 address 172.31.26.163/20, gateway 172.31.16.1 acquired from 172.31.16.1
Dec 13 14:13:51.452776 systemd[1]: Finished dracut-initqueue.service.
Dec 13 14:13:51.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:51.456240 systemd[1]: Reached target remote-fs-pre.target.
Dec 13 14:13:51.459243 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 14:13:51.474421 systemd[1]: Reached target remote-fs.target.
Dec 13 14:13:51.480053 systemd[1]: Starting dracut-pre-mount.service...
Dec 13 14:13:51.497487 systemd[1]: Finished dracut-pre-mount.service.
Dec 13 14:13:51.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:52.145607 ignition[1052]: Ignition 2.14.0
Dec 13 14:13:52.146130 ignition[1052]: Stage: fetch-offline
Dec 13 14:13:52.146453 ignition[1052]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:13:52.146540 ignition[1052]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:13:52.164315 ignition[1052]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:13:52.166893 ignition[1052]: Ignition finished successfully
Dec 13 14:13:52.180549 kernel: kauditd_printk_skb: 17 callbacks suppressed
Dec 13 14:13:52.180909 kernel: audit: type=1130 audit(1734099232.170:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:52.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:52.169524 systemd[1]: Finished ignition-fetch-offline.service.
Dec 13 14:13:52.173196 systemd[1]: Starting ignition-fetch.service...
Dec 13 14:13:52.190566 ignition[1119]: Ignition 2.14.0
Dec 13 14:13:52.190596 ignition[1119]: Stage: fetch
Dec 13 14:13:52.190926 ignition[1119]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:13:52.190984 ignition[1119]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:13:52.207079 ignition[1119]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:13:52.209243 ignition[1119]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:13:52.226701 ignition[1119]: INFO : PUT result: OK
Dec 13 14:13:52.229601 ignition[1119]: DEBUG : parsed url from cmdline: ""
Dec 13 14:13:52.229601 ignition[1119]: INFO : no config URL provided
Dec 13 14:13:52.229601 ignition[1119]: INFO : reading system config file "/usr/lib/ignition/user.ign"
Dec 13 14:13:52.235049 ignition[1119]: INFO : no config at "/usr/lib/ignition/user.ign"
Dec 13 14:13:52.235049 ignition[1119]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:13:52.235049 ignition[1119]: INFO : PUT result: OK
Dec 13 14:13:52.240555 ignition[1119]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Dec 13 14:13:52.243285 ignition[1119]: INFO : GET result: OK
Dec 13 14:13:52.244762 ignition[1119]: DEBUG : parsing config with SHA512: 9c2ccce6b5c84c915797a0268904903a2c89fba0abf4117b7ae918f23d37320b3114fddad59442ef840d4f02e3fd2c160d6e559545d1824d4ab43e169bc7cacc
Dec 13 14:13:52.251604 unknown[1119]: fetched base config from "system"
Dec 13 14:13:52.251670 unknown[1119]: fetched base config from "system"
Dec 13 14:13:52.251688 unknown[1119]: fetched user config from "aws"
Dec 13 14:13:52.258255 ignition[1119]: fetch: fetch complete
Dec 13 14:13:52.258298 ignition[1119]: fetch: fetch passed
Dec 13 14:13:52.258846 ignition[1119]: Ignition finished successfully
Dec 13 14:13:52.264776 systemd[1]: Finished ignition-fetch.service.
Dec 13 14:13:52.274484 kernel: audit: type=1130 audit(1734099232.265:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:52.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:52.274034 systemd[1]: Starting ignition-kargs.service...
Dec 13 14:13:52.291552 ignition[1125]: Ignition 2.14.0
Dec 13 14:13:52.292058 ignition[1125]: Stage: kargs
Dec 13 14:13:52.292368 ignition[1125]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:13:52.292428 ignition[1125]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:13:52.308288 ignition[1125]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:13:52.310486 ignition[1125]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:13:52.313514 ignition[1125]: INFO : PUT result: OK
Dec 13 14:13:52.318413 ignition[1125]: kargs: kargs passed
Dec 13 14:13:52.318525 ignition[1125]: Ignition finished successfully
Dec 13 14:13:52.322667 systemd[1]: Finished ignition-kargs.service.
Dec 13 14:13:52.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:52.326681 systemd[1]: Starting ignition-disks.service...
Dec 13 14:13:52.336645 kernel: audit: type=1130 audit(1734099232.323:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:52.341360 ignition[1131]: Ignition 2.14.0
Dec 13 14:13:52.341386 ignition[1131]: Stage: disks
Dec 13 14:13:52.341715 ignition[1131]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:13:52.341774 ignition[1131]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:13:52.354810 ignition[1131]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:13:52.356965 ignition[1131]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:13:52.360025 ignition[1131]: INFO : PUT result: OK
Dec 13 14:13:52.365041 ignition[1131]: disks: disks passed
Dec 13 14:13:52.365156 ignition[1131]: Ignition finished successfully
Dec 13 14:13:52.369344 systemd[1]: Finished ignition-disks.service.
Dec 13 14:13:52.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:52.372329 systemd[1]: Reached target initrd-root-device.target.
Dec 13 14:13:52.381695 kernel: audit: type=1130 audit(1734099232.370:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:52.381863 systemd[1]: Reached target local-fs-pre.target.
Dec 13 14:13:52.384983 systemd[1]: Reached target local-fs.target.
Dec 13 14:13:52.387855 systemd[1]: Reached target sysinit.target.
Dec 13 14:13:52.390702 systemd[1]: Reached target basic.target.
Dec 13 14:13:52.394873 systemd[1]: Starting systemd-fsck-root.service...
Dec 13 14:13:52.430746 systemd-fsck[1139]: ROOT: clean, 621/553520 files, 56020/553472 blocks
Dec 13 14:13:52.443983 systemd[1]: Finished systemd-fsck-root.service.
Dec 13 14:13:52.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:52.448032 systemd[1]: Mounting sysroot.mount...
Dec 13 14:13:52.457218 kernel: audit: type=1130 audit(1734099232.444:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:52.466663 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Dec 13 14:13:52.468799 systemd[1]: Mounted sysroot.mount.
Dec 13 14:13:52.470890 systemd[1]: Reached target initrd-root-fs.target.
Dec 13 14:13:52.486245 systemd[1]: Mounting sysroot-usr.mount...
Dec 13 14:13:52.488486 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Dec 13 14:13:52.488565 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 14:13:52.488640 systemd[1]: Reached target ignition-diskful.target.
Dec 13 14:13:52.497098 systemd[1]: Mounted sysroot-usr.mount.
Dec 13 14:13:52.523343 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Dec 13 14:13:52.528209 systemd[1]: Starting initrd-setup-root.service...
Dec 13 14:13:52.546657 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1156)
Dec 13 14:13:52.547757 initrd-setup-root[1161]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 14:13:52.556071 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 14:13:52.556140 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 14:13:52.556172 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Dec 13 14:13:52.564654 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 14:13:52.568516 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Dec 13 14:13:52.573738 initrd-setup-root[1187]: cut: /sysroot/etc/group: No such file or directory
Dec 13 14:13:52.583274 initrd-setup-root[1195]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 14:13:52.592694 initrd-setup-root[1203]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 14:13:52.807965 systemd[1]: Finished initrd-setup-root.service.
Dec 13 14:13:52.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:52.812294 systemd[1]: Starting ignition-mount.service...
Dec 13 14:13:52.821327 kernel: audit: type=1130 audit(1734099232.809:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:52.822567 systemd[1]: Starting sysroot-boot.service...
Dec 13 14:13:52.833359 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Dec 13 14:13:52.833527 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Dec 13 14:13:52.854825 systemd-networkd[1095]: eth0: Gained IPv6LL Dec 13 14:13:52.869070 ignition[1222]: INFO : Ignition 2.14.0 Dec 13 14:13:52.870990 ignition[1222]: INFO : Stage: mount Dec 13 14:13:52.871780 systemd[1]: Finished sysroot-boot.service. Dec 13 14:13:52.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:52.881649 kernel: audit: type=1130 audit(1734099232.872:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:52.882098 ignition[1222]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:13:52.884515 ignition[1222]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 14:13:52.899270 ignition[1222]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 14:13:52.901614 ignition[1222]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 14:13:52.903920 ignition[1222]: INFO : PUT result: OK Dec 13 14:13:52.909386 ignition[1222]: INFO : mount: mount passed Dec 13 14:13:52.910954 ignition[1222]: INFO : Ignition finished successfully Dec 13 14:13:52.914003 systemd[1]: Finished ignition-mount.service. Dec 13 14:13:52.926431 kernel: audit: type=1130 audit(1734099232.914:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:52.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:13:52.917060 systemd[1]: Starting ignition-files.service... Dec 13 14:13:52.933707 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 14:13:52.953893 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1231) Dec 13 14:13:52.958943 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Dec 13 14:13:52.959007 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 14:13:52.959033 kernel: BTRFS info (device nvme0n1p6): has skinny extents Dec 13 14:13:52.967657 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 14:13:52.972175 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 14:13:52.991356 ignition[1250]: INFO : Ignition 2.14.0 Dec 13 14:13:52.991356 ignition[1250]: INFO : Stage: files Dec 13 14:13:52.994508 ignition[1250]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:13:52.994508 ignition[1250]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 14:13:53.011317 ignition[1250]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 14:13:53.011317 ignition[1250]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 14:13:53.016469 ignition[1250]: INFO : PUT result: OK Dec 13 14:13:53.021291 ignition[1250]: DEBUG : files: compiled without relabeling support, skipping Dec 13 14:13:53.025669 ignition[1250]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 14:13:53.028237 ignition[1250]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 14:13:53.046215 ignition[1250]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 14:13:53.048992 ignition[1250]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 
14:13:53.051901 unknown[1250]: wrote ssh authorized keys file for user: core Dec 13 14:13:53.056804 ignition[1250]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 14:13:53.066183 ignition[1250]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 14:13:53.066183 ignition[1250]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 14:13:53.066183 ignition[1250]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 14:13:53.066183 ignition[1250]: INFO : GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Dec 13 14:13:53.140370 ignition[1250]: INFO : GET result: OK Dec 13 14:13:53.304250 ignition[1250]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 14:13:53.307807 ignition[1250]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 14:13:53.307807 ignition[1250]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 14:13:53.307807 ignition[1250]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 14:13:53.319997 ignition[1250]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 14:13:53.319997 ignition[1250]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/etc/eks/bootstrap.sh" Dec 13 14:13:53.319997 ignition[1250]: INFO : oem config not found in 
"/usr/share/oem", looking on oem partition Dec 13 14:13:53.333209 ignition[1250]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2349075162" Dec 13 14:13:53.339509 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1253) Dec 13 14:13:53.339547 ignition[1250]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2349075162": device or resource busy Dec 13 14:13:53.339547 ignition[1250]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2349075162", trying btrfs: device or resource busy Dec 13 14:13:53.339547 ignition[1250]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2349075162" Dec 13 14:13:53.339547 ignition[1250]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2349075162" Dec 13 14:13:53.365257 ignition[1250]: INFO : op(3): [started] unmounting "/mnt/oem2349075162" Dec 13 14:13:53.367576 ignition[1250]: INFO : op(3): [finished] unmounting "/mnt/oem2349075162" Dec 13 14:13:53.367576 ignition[1250]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/etc/eks/bootstrap.sh" Dec 13 14:13:53.367576 ignition[1250]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:13:53.376191 ignition[1250]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:13:53.376191 ignition[1250]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 14:13:53.376191 ignition[1250]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 14:13:53.376191 ignition[1250]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/home/core/install.sh" Dec 13 14:13:53.388868 
ignition[1250]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 14:13:53.392147 ignition[1250]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 14:13:53.395367 ignition[1250]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 14:13:53.398922 ignition[1250]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Dec 13 14:13:53.402348 ignition[1250]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 14:13:53.415946 ignition[1250]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3109638528"
Dec 13 14:13:53.418724 ignition[1250]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3109638528": device or resource busy
Dec 13 14:13:53.418724 ignition[1250]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3109638528", trying btrfs: device or resource busy
Dec 13 14:13:53.418724 ignition[1250]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3109638528"
Dec 13 14:13:53.428787 ignition[1250]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3109638528"
Dec 13 14:13:53.434511 ignition[1250]: INFO : op(6): [started] unmounting "/mnt/oem3109638528"
Dec 13 14:13:53.436689 ignition[1250]: INFO : op(6): [finished] unmounting "/mnt/oem3109638528"
Dec 13 14:13:53.436689 ignition[1250]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Dec 13 14:13:53.436689 ignition[1250]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 14:13:53.448953 ignition[1250]: INFO : GET
https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Dec 13 14:13:53.835446 ignition[1250]: INFO : GET result: OK
Dec 13 14:13:54.444709 ignition[1250]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 14:13:54.449505 ignition[1250]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Dec 13 14:13:54.449505 ignition[1250]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 14:13:54.468122 ignition[1250]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3811559107"
Dec 13 14:13:54.471030 ignition[1250]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3811559107": device or resource busy
Dec 13 14:13:54.474128 ignition[1250]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3811559107", trying btrfs: device or resource busy
Dec 13 14:13:54.474128 ignition[1250]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3811559107"
Dec 13 14:13:54.480215 ignition[1250]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3811559107"
Dec 13 14:13:54.480215 ignition[1250]: INFO : op(9): [started] unmounting "/mnt/oem3811559107"
Dec 13 14:13:54.485455 ignition[1250]: INFO : op(9): [finished] unmounting "/mnt/oem3811559107"
Dec 13 14:13:54.485455 ignition[1250]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Dec 13 14:13:54.485455 ignition[1250]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Dec 13 14:13:54.485455 ignition[1250]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 14:13:54.508995 ignition[1250]: INFO : op(a):
[started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3836261848"
Dec 13 14:13:54.508995 ignition[1250]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3836261848": device or resource busy
Dec 13 14:13:54.508995 ignition[1250]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3836261848", trying btrfs: device or resource busy
Dec 13 14:13:54.508995 ignition[1250]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3836261848"
Dec 13 14:13:54.508995 ignition[1250]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3836261848"
Dec 13 14:13:54.523240 ignition[1250]: INFO : op(c): [started] unmounting "/mnt/oem3836261848"
Dec 13 14:13:54.523240 ignition[1250]: INFO : op(c): [finished] unmounting "/mnt/oem3836261848"
Dec 13 14:13:54.523240 ignition[1250]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Dec 13 14:13:54.523240 ignition[1250]: INFO : files: op(10): [started] processing unit "coreos-metadata-sshkeys@.service"
Dec 13 14:13:54.523240 ignition[1250]: INFO : files: op(10): [finished] processing unit "coreos-metadata-sshkeys@.service"
Dec 13 14:13:54.523240 ignition[1250]: INFO : files: op(11): [started] processing unit "amazon-ssm-agent.service"
Dec 13 14:13:54.523240 ignition[1250]: INFO : files: op(11): op(12): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Dec 13 14:13:54.541586 ignition[1250]: INFO : files: op(11): op(12): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Dec 13 14:13:54.541586 ignition[1250]: INFO : files: op(11): [finished] processing unit "amazon-ssm-agent.service"
Dec 13 14:13:54.541586 ignition[1250]: INFO : files: op(13): [started] processing unit "nvidia.service"
Dec 13 14:13:54.541586 ignition[1250]: INFO : files: op(13): [finished] processing unit
"nvidia.service"
Dec 13 14:13:54.541586 ignition[1250]: INFO : files: op(14): [started] processing unit "containerd.service"
Dec 13 14:13:54.541586 ignition[1250]: INFO : files: op(14): op(15): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Dec 13 14:13:54.558561 ignition[1250]: INFO : files: op(14): op(15): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Dec 13 14:13:54.558561 ignition[1250]: INFO : files: op(14): [finished] processing unit "containerd.service"
Dec 13 14:13:54.558561 ignition[1250]: INFO : files: op(16): [started] processing unit "prepare-helm.service"
Dec 13 14:13:54.558561 ignition[1250]: INFO : files: op(16): op(17): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 14:13:54.558561 ignition[1250]: INFO : files: op(16): op(17): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 14:13:54.573896 ignition[1250]: INFO : files: op(16): [finished] processing unit "prepare-helm.service"
Dec 13 14:13:54.573896 ignition[1250]: INFO : files: op(18): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 14:13:54.586350 ignition[1250]: INFO : files: op(18): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 14:13:54.586350 ignition[1250]: INFO : files: op(19): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Dec 13 14:13:54.586350 ignition[1250]: INFO : files: op(19): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Dec 13 14:13:54.586350 ignition[1250]: INFO : files: op(1a): [started] setting preset to enabled for "amazon-ssm-agent.service"
Dec 13 14:13:54.586350 ignition[1250]: INFO : files: op(1a): [finished] setting preset to enabled for "amazon-ssm-agent.service"
Dec 13
14:13:54.586350 ignition[1250]: INFO : files: op(1b): [started] setting preset to enabled for "nvidia.service"
Dec 13 14:13:54.586350 ignition[1250]: INFO : files: op(1b): [finished] setting preset to enabled for "nvidia.service"
Dec 13 14:13:54.606945 ignition[1250]: INFO : files: createResultFile: createFiles: op(1c): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 14:13:54.610567 ignition[1250]: INFO : files: createResultFile: createFiles: op(1c): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 14:13:54.614007 ignition[1250]: INFO : files: files passed
Dec 13 14:13:54.614007 ignition[1250]: INFO : Ignition finished successfully
Dec 13 14:13:54.619021 systemd[1]: Finished ignition-files.service.
Dec 13 14:13:54.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:54.629661 kernel: audit: type=1130 audit(1734099234.619:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:54.633948 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Dec 13 14:13:54.635894 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Dec 13 14:13:54.646159 systemd[1]: Starting ignition-quench.service...
Dec 13 14:13:54.656159 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 14:13:54.657092 systemd[1]: Finished ignition-quench.service.
Dec 13 14:13:54.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success'
Dec 13 14:13:54.660000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:54.670673 kernel: audit: type=1130 audit(1734099234.659:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:54.673387 initrd-setup-root-after-ignition[1276]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 14:13:54.678289 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Dec 13 14:13:54.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:54.680989 systemd[1]: Reached target ignition-complete.target.
Dec 13 14:13:54.688564 systemd[1]: Starting initrd-parse-etc.service...
Dec 13 14:13:54.721488 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 14:13:54.722026 systemd[1]: Finished initrd-parse-etc.service.
Dec 13 14:13:54.727530 systemd[1]: Reached target initrd-fs.target.
Dec 13 14:13:54.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:54.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:54.730819 systemd[1]: Reached target initrd.target.
Dec 13 14:13:54.733823 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Dec 13 14:13:54.737923 systemd[1]: Starting dracut-pre-pivot.service...
Dec 13 14:13:54.764722 systemd[1]: Finished dracut-pre-pivot.service.
Dec 13 14:13:54.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:54.769307 systemd[1]: Starting initrd-cleanup.service...
Dec 13 14:13:54.789572 systemd[1]: Stopped target nss-lookup.target.
Dec 13 14:13:54.792988 systemd[1]: Stopped target remote-cryptsetup.target.
Dec 13 14:13:54.799033 systemd[1]: Stopped target timers.target.
Dec 13 14:13:54.801176 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 14:13:54.801000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:54.801407 systemd[1]: Stopped dracut-pre-pivot.service.
Dec 13 14:13:54.804226 systemd[1]: Stopped target initrd.target.
Dec 13 14:13:54.806160 systemd[1]: Stopped target basic.target.
Dec 13 14:13:54.809125 systemd[1]: Stopped target ignition-complete.target.
Dec 13 14:13:54.830000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:54.811409 systemd[1]: Stopped target ignition-diskful.target.
Dec 13 14:13:54.814143 systemd[1]: Stopped target initrd-root-device.target.
Dec 13 14:13:54.816112 systemd[1]: Stopped target remote-fs.target.
Dec 13 14:13:54.817955 systemd[1]: Stopped target remote-fs-pre.target.
Dec 13 14:13:54.819932 systemd[1]: Stopped target sysinit.target.
Dec 13 14:13:54.821752 systemd[1]: Stopped target local-fs.target.
Dec 13 14:13:54.824256 systemd[1]: Stopped target local-fs-pre.target.
Dec 13 14:13:54.826884 systemd[1]: Stopped target swap.target.
Dec 13 14:13:54.829239 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 14:13:54.829612 systemd[1]: Stopped dracut-pre-mount.service.
Dec 13 14:13:54.832290 systemd[1]: Stopped target cryptsetup.target.
Dec 13 14:13:54.848797 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 14:13:54.850913 systemd[1]: Stopped dracut-initqueue.service.
Dec 13 14:13:54.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:54.854323 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 14:13:54.856879 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Dec 13 14:13:54.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:54.860838 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 14:13:54.861114 systemd[1]: Stopped ignition-files.service.
Dec 13 14:13:54.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:54.868075 systemd[1]: Stopping ignition-mount.service...
Dec 13 14:13:54.883883 ignition[1289]: INFO : Ignition 2.14.0
Dec 13 14:13:54.883883 ignition[1289]: INFO : Stage: umount
Dec 13 14:13:54.883883 ignition[1289]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:13:54.883883 ignition[1289]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:13:54.896726 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 14:13:54.899298 systemd[1]: Stopped kmod-static-nodes.service.
Dec 13 14:13:54.901000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:54.905282 systemd[1]: Stopping sysroot-boot.service...
Dec 13 14:13:54.909865 ignition[1289]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:13:54.909865 ignition[1289]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:13:54.917953 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 14:13:54.918662 systemd[1]: Stopped systemd-udev-trigger.service.
Dec 13 14:13:54.923906 ignition[1289]: INFO : PUT result: OK
Dec 13 14:13:54.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:54.935018 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 14:13:54.938394 ignition[1289]: INFO : umount: umount passed
Dec 13 14:13:54.938394 ignition[1289]: INFO : Ignition finished successfully
Dec 13 14:13:54.943204 systemd[1]: Stopped dracut-pre-trigger.service.
Dec 13 14:13:54.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:54.954477 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 14:13:54.958114 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 14:13:54.960235 systemd[1]: Stopped ignition-mount.service.
Dec 13 14:13:54.962000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success'
Dec 13 14:13:54.964778 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 14:13:54.967049 systemd[1]: Stopped sysroot-boot.service.
Dec 13 14:13:54.969000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:54.972780 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 14:13:54.974978 systemd[1]: Finished initrd-cleanup.service.
Dec 13 14:13:54.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:54.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:54.980189 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 14:13:54.982371 systemd[1]: Stopped ignition-disks.service.
Dec 13 14:13:54.984000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:54.985976 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 14:13:54.988178 systemd[1]: Stopped ignition-kargs.service.
Dec 13 14:13:54.990000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:54.991279 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 14:13:54.991387 systemd[1]: Stopped ignition-fetch.service.
Dec 13 14:13:54.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:54.995884 systemd[1]: Stopped target network.target.
Dec 13 14:13:54.997585 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 14:13:55.001090 systemd[1]: Stopped ignition-fetch-offline.service.
Dec 13 14:13:55.003000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:55.004504 systemd[1]: Stopped target paths.target.
Dec 13 14:13:55.007319 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 14:13:55.009697 systemd[1]: Stopped systemd-ask-password-console.path.
Dec 13 14:13:55.012953 systemd[1]: Stopped target slices.target.
Dec 13 14:13:55.015607 systemd[1]: Stopped target sockets.target.
Dec 13 14:13:55.018834 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 14:13:55.018936 systemd[1]: Closed iscsid.socket.
Dec 13 14:13:55.023216 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 14:13:55.023320 systemd[1]: Closed iscsiuio.socket.
Dec 13 14:13:55.026351 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 14:13:55.029548 systemd[1]: Stopped ignition-setup.service.
Dec 13 14:13:55.031000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:55.032568 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 14:13:55.032682 systemd[1]: Stopped initrd-setup-root.service.
Dec 13 14:13:55.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:55.038208 systemd[1]: Stopping systemd-networkd.service...
Dec 13 14:13:55.041420 systemd[1]: Stopping systemd-resolved.service...
Dec 13 14:13:55.045734 systemd-networkd[1095]: eth0: DHCPv6 lease lost
Dec 13 14:13:55.049170 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 14:13:55.051568 systemd[1]: Stopped systemd-networkd.service.
Dec 13 14:13:55.053000 audit: BPF prog-id=9 op=UNLOAD
Dec 13 14:13:55.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:55.055925 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 14:13:55.058132 systemd[1]: Stopped systemd-resolved.service.
Dec 13 14:13:55.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:55.062286 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 14:13:55.063000 audit: BPF prog-id=6 op=UNLOAD
Dec 13 14:13:55.064542 systemd[1]: Closed systemd-networkd.socket.
Dec 13 14:13:55.069137 systemd[1]: Stopping network-cleanup.service...
Dec 13 14:13:55.073518 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 14:13:55.070000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success'
Dec 13 14:13:55.075000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:55.073674 systemd[1]: Stopped parse-ip-for-networkd.service.
Dec 13 14:13:55.075453 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 14:13:55.075559 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 14:13:55.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:55.082828 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 14:13:55.082951 systemd[1]: Stopped systemd-modules-load.service.
Dec 13 14:13:55.084932 systemd[1]: Stopping systemd-udevd.service...
Dec 13 14:13:55.092006 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 13 14:13:55.112692 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 14:13:55.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:55.112944 systemd[1]: Stopped network-cleanup.service.
Dec 13 14:13:55.119424 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 14:13:55.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:55.120060 systemd[1]: Stopped systemd-udevd.service.
Dec 13 14:13:55.123403 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 14:13:55.123502 systemd[1]: Closed systemd-udevd-control.socket.
Dec 13 14:13:55.130000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:55.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:55.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:55.125441 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 14:13:55.149000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:55.125833 systemd[1]: Closed systemd-udevd-kernel.socket.
Dec 13 14:13:55.128979 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 14:13:55.129094 systemd[1]: Stopped dracut-pre-udev.service.
Dec 13 14:13:55.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:55.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:55.132345 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 14:13:55.132449 systemd[1]: Stopped dracut-cmdline.service.
Dec 13 14:13:55.135002 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 14:13:55.135105 systemd[1]: Stopped dracut-cmdline-ask.service.
Dec 13 14:13:55.138288 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Dec 13 14:13:55.148777 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 14:13:55.148898 systemd[1]: Stopped systemd-vconsole-setup.service.
Dec 13 14:13:55.154765 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 14:13:55.156002 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Dec 13 14:13:55.159056 systemd[1]: Reached target initrd-switch-root.target.
Dec 13 14:13:55.162744 systemd[1]: Starting initrd-switch-root.service...
Dec 13 14:13:55.192056 systemd[1]: Switching root.
Dec 13 14:13:55.195000 audit: BPF prog-id=8 op=UNLOAD
Dec 13 14:13:55.195000 audit: BPF prog-id=7 op=UNLOAD
Dec 13 14:13:55.196000 audit: BPF prog-id=5 op=UNLOAD
Dec 13 14:13:55.196000 audit: BPF prog-id=4 op=UNLOAD
Dec 13 14:13:55.196000 audit: BPF prog-id=3 op=UNLOAD
Dec 13 14:13:55.215117 iscsid[1100]: iscsid shutting down.
Dec 13 14:13:55.218372 systemd-journald[309]: Received SIGTERM from PID 1 (systemd).
Dec 13 14:13:55.218497 systemd-journald[309]: Journal stopped
Dec 13 14:14:01.979102 kernel: SELinux: Class mctp_socket not defined in policy.
Dec 13 14:14:01.980785 kernel: SELinux: Class anon_inode not defined in policy.
Dec 13 14:14:01.980839 kernel: SELinux: the above unknown classes and permissions will be allowed
Dec 13 14:14:01.980872 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 14:14:01.980906 kernel: SELinux: policy capability open_perms=1
Dec 13 14:14:01.980945 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 14:14:01.980977 kernel: SELinux: policy capability always_check_network=0
Dec 13 14:14:01.981016 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 14:14:01.981048 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 14:14:01.981080 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 14:14:01.981111 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 14:14:01.981145 systemd[1]: Successfully loaded SELinux policy in 107.827ms.
Dec 13 14:14:01.981204 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 33.474ms.
Dec 13 14:14:01.981239 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 14:14:01.981276 systemd[1]: Detected virtualization amazon.
Dec 13 14:14:01.981311 systemd[1]: Detected architecture arm64.
Dec 13 14:14:01.981342 systemd[1]: Detected first boot.
Dec 13 14:14:01.981377 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 14:14:01.981408 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Dec 13 14:14:01.981441 systemd[1]: Populated /etc with preset unit settings.
Dec 13 14:14:01.981473 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 14:14:01.981513 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 14:14:01.981553 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:14:01.981587 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 14:14:01.981645 systemd[1]: Created slice system-addon\x2dconfig.slice.
Dec 13 14:14:01.981684 systemd[1]: Created slice system-addon\x2drun.slice.
Dec 13 14:14:01.981719 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Dec 13 14:14:01.981752 systemd[1]: Created slice system-getty.slice.
Dec 13 14:14:01.981782 systemd[1]: Created slice system-modprobe.slice.
Dec 13 14:14:01.981818 systemd[1]: Created slice system-serial\x2dgetty.slice.
Dec 13 14:14:01.981851 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Dec 13 14:14:01.981882 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Dec 13 14:14:01.981911 systemd[1]: Created slice user.slice.
Dec 13 14:14:01.981942 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 14:14:01.981972 systemd[1]: Started systemd-ask-password-wall.path.
Dec 13 14:14:01.982004 systemd[1]: Set up automount boot.automount.
Dec 13 14:14:01.982035 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Dec 13 14:14:01.982065 systemd[1]: Reached target integritysetup.target.
Dec 13 14:14:01.982099 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 14:14:01.982128 systemd[1]: Reached target remote-fs.target.
Dec 13 14:14:01.982158 systemd[1]: Reached target slices.target.
Dec 13 14:14:01.982188 systemd[1]: Reached target swap.target.
Dec 13 14:14:01.982217 systemd[1]: Reached target torcx.target.
Dec 13 14:14:01.982273 systemd[1]: Reached target veritysetup.target.
Dec 13 14:14:01.982307 systemd[1]: Listening on systemd-coredump.socket.
Dec 13 14:14:01.982341 systemd[1]: Listening on systemd-initctl.socket.
Dec 13 14:14:01.982375 kernel: kauditd_printk_skb: 54 callbacks suppressed
Dec 13 14:14:01.982406 kernel: audit: type=1400 audit(1734099241.598:85): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 14:14:01.982437 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 14:14:01.982467 kernel: audit: type=1335 audit(1734099241.600:86): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Dec 13 14:14:01.982498 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 14:14:01.982528 systemd[1]: Listening on systemd-journald.socket.
Dec 13 14:14:01.982557 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 14:14:01.982588 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 14:14:01.982638 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 14:14:01.982675 systemd[1]: Listening on systemd-userdbd.socket.
Dec 13 14:14:01.982710 systemd[1]: Mounting dev-hugepages.mount...
Dec 13 14:14:01.982743 systemd[1]: Mounting dev-mqueue.mount...
Dec 13 14:14:01.982776 systemd[1]: Mounting media.mount...
Dec 13 14:14:01.982807 systemd[1]: Mounting sys-kernel-debug.mount...
Dec 13 14:14:01.982837 systemd[1]: Mounting sys-kernel-tracing.mount...
Dec 13 14:14:01.982866 systemd[1]: Mounting tmp.mount...
Dec 13 14:14:01.982899 systemd[1]: Starting flatcar-tmpfiles.service...
Dec 13 14:14:01.982933 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:14:01.982967 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 14:14:01.982998 systemd[1]: Starting modprobe@configfs.service...
Dec 13 14:14:01.983033 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:14:01.983065 systemd[1]: Starting modprobe@drm.service...
Dec 13 14:14:01.983094 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:14:01.983124 systemd[1]: Starting modprobe@fuse.service...
Dec 13 14:14:01.983153 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:14:01.983183 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 14:14:01.983218 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Dec 13 14:14:01.983248 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Dec 13 14:14:01.983279 systemd[1]: Starting systemd-journald.service...
Dec 13 14:14:01.983308 systemd[1]: Starting systemd-modules-load.service...
Dec 13 14:14:01.983341 systemd[1]: Starting systemd-network-generator.service...
Dec 13 14:14:01.983374 systemd[1]: Starting systemd-remount-fs.service...
Dec 13 14:14:01.983404 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 14:14:01.983435 systemd[1]: Mounted dev-hugepages.mount.
Dec 13 14:14:01.983467 systemd[1]: Mounted dev-mqueue.mount.
Dec 13 14:14:01.983502 systemd[1]: Mounted media.mount.
Dec 13 14:14:01.983532 systemd[1]: Mounted sys-kernel-debug.mount.
Dec 13 14:14:01.983562 systemd[1]: Mounted sys-kernel-tracing.mount.
Dec 13 14:14:01.983591 systemd[1]: Mounted tmp.mount.
Dec 13 14:14:01.989797 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 14:14:01.989848 kernel: audit: type=1130 audit(1734099241.876:87): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:01.989882 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 14:14:01.989918 systemd[1]: Finished modprobe@configfs.service.
Dec 13 14:14:01.989957 kernel: audit: type=1130 audit(1734099241.896:88): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:01.989989 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:14:01.990020 kernel: audit: type=1131 audit(1734099241.896:89): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:01.990049 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:14:01.990079 kernel: audit: type=1130 audit(1734099241.921:90): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:01.990110 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 14:14:01.990141 kernel: audit: type=1131 audit(1734099241.921:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:01.990168 kernel: loop: module loaded
Dec 13 14:14:01.990202 kernel: fuse: init (API version 7.34)
Dec 13 14:14:01.990250 systemd[1]: Finished modprobe@drm.service.
Dec 13 14:14:01.990287 kernel: audit: type=1130 audit(1734099241.946:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:01.990320 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:14:01.990351 kernel: audit: type=1131 audit(1734099241.946:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:01.990380 kernel: audit: type=1305 audit(1734099241.959:94): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Dec 13 14:14:01.990409 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:14:01.990438 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 14:14:01.990472 systemd[1]: Finished modprobe@fuse.service.
Dec 13 14:14:01.990506 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:14:01.990540 systemd-journald[1442]: Journal started
Dec 13 14:14:01.990676 systemd-journald[1442]: Runtime Journal (/run/log/journal/ec28010aef86db6f0cbbeafc796a3d32) is 8.0M, max 75.4M, 67.4M free.
Dec 13 14:14:01.600000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Dec 13 14:14:01.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:01.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:01.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:01.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:01.921000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:01.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:01.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:01.959000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Dec 13 14:14:01.959000 audit[1442]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=5 a1=ffffe04f67b0 a2=4000 a3=1 items=0 ppid=1 pid=1442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:14:01.959000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Dec 13 14:14:01.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:01.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:01.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:01.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:01.995977 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:14:02.007133 systemd[1]: Started systemd-journald.service.
Dec 13 14:14:01.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:01.999000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:02.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:02.008077 systemd[1]: Finished systemd-modules-load.service.
Dec 13 14:14:02.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:02.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:02.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:02.010527 systemd[1]: Finished systemd-network-generator.service.
Dec 13 14:14:02.013328 systemd[1]: Finished systemd-remount-fs.service.
Dec 13 14:14:02.016599 systemd[1]: Reached target network-pre.target.
Dec 13 14:14:02.020942 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Dec 13 14:14:02.031346 systemd[1]: Mounting sys-kernel-config.mount...
Dec 13 14:14:02.032912 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 14:14:02.038172 systemd[1]: Starting systemd-hwdb-update.service...
Dec 13 14:14:02.054510 systemd[1]: Starting systemd-journal-flush.service...
Dec 13 14:14:02.056370 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:14:02.059534 systemd[1]: Starting systemd-random-seed.service...
Dec 13 14:14:02.061956 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:14:02.064587 systemd[1]: Starting systemd-sysctl.service...
Dec 13 14:14:02.076091 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Dec 13 14:14:02.078673 systemd[1]: Mounted sys-kernel-config.mount.
Dec 13 14:14:02.098099 systemd-journald[1442]: Time spent on flushing to /var/log/journal/ec28010aef86db6f0cbbeafc796a3d32 is 71.222ms for 1062 entries.
Dec 13 14:14:02.098099 systemd-journald[1442]: System Journal (/var/log/journal/ec28010aef86db6f0cbbeafc796a3d32) is 8.0M, max 195.6M, 187.6M free.
Dec 13 14:14:02.189955 systemd-journald[1442]: Received client request to flush runtime journal.
Dec 13 14:14:02.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:02.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:02.113485 systemd[1]: Finished flatcar-tmpfiles.service.
Dec 13 14:14:02.133924 systemd[1]: Starting systemd-sysusers.service...
Dec 13 14:14:02.136512 systemd[1]: Finished systemd-random-seed.service.
Dec 13 14:14:02.138814 systemd[1]: Reached target first-boot-complete.target.
Dec 13 14:14:02.191352 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:14:02.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:02.194307 systemd[1]: Finished systemd-journal-flush.service.
Dec 13 14:14:02.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:02.239793 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 14:14:02.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:02.244088 systemd[1]: Starting systemd-udev-settle.service...
Dec 13 14:14:02.261191 udevadm[1496]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Dec 13 14:14:02.345991 systemd[1]: Finished systemd-sysusers.service.
Dec 13 14:14:02.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:02.350216 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 14:14:02.453164 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 14:14:02.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:03.084964 systemd[1]: Finished systemd-hwdb-update.service.
Dec 13 14:14:03.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:03.089534 systemd[1]: Starting systemd-udevd.service...
Dec 13 14:14:03.132755 systemd-udevd[1502]: Using default interface naming scheme 'v252'.
Dec 13 14:14:03.180828 systemd[1]: Started systemd-udevd.service.
Dec 13 14:14:03.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:03.185812 systemd[1]: Starting systemd-networkd.service...
Dec 13 14:14:03.195501 systemd[1]: Starting systemd-userdbd.service...
Dec 13 14:14:03.283761 systemd[1]: Found device dev-ttyS0.device.
Dec 13 14:14:03.291144 (udev-worker)[1505]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 14:14:03.312391 systemd[1]: Started systemd-userdbd.service.
Dec 13 14:14:03.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:03.514467 systemd-networkd[1507]: lo: Link UP
Dec 13 14:14:03.515007 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1511)
Dec 13 14:14:03.514492 systemd-networkd[1507]: lo: Gained carrier
Dec 13 14:14:03.515456 systemd-networkd[1507]: Enumeration completed
Dec 13 14:14:03.515759 systemd[1]: Started systemd-networkd.service.
Dec 13 14:14:03.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:03.520042 systemd[1]: Starting systemd-networkd-wait-online.service...
Dec 13 14:14:03.523179 systemd-networkd[1507]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 14:14:03.536149 systemd-networkd[1507]: eth0: Link UP
Dec 13 14:14:03.538918 systemd-networkd[1507]: eth0: Gained carrier
Dec 13 14:14:03.539671 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 14:14:03.550879 systemd-networkd[1507]: eth0: DHCPv4 address 172.31.26.163/20, gateway 172.31.16.1 acquired from 172.31.16.1
Dec 13 14:14:03.698828 systemd[1]: Finished systemd-udev-settle.service.
Dec 13 14:14:03.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:03.725118 systemd[1]: dev-disk-by\x2dlabel-OEM.device was skipped because of an unmet condition check (ConditionPathExists=!/usr/.noupdate).
Dec 13 14:14:03.728026 systemd[1]: Starting lvm2-activation-early.service...
Dec 13 14:14:03.797885 lvm[1623]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 14:14:03.839425 systemd[1]: Finished lvm2-activation-early.service.
Dec 13 14:14:03.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:03.842338 systemd[1]: Reached target cryptsetup.target.
Dec 13 14:14:03.847065 systemd[1]: Starting lvm2-activation.service...
Dec 13 14:14:03.856978 lvm[1625]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 14:14:03.891452 systemd[1]: Finished lvm2-activation.service.
Dec 13 14:14:03.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:03.893400 systemd[1]: Reached target local-fs-pre.target.
Dec 13 14:14:03.895148 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 14:14:03.895205 systemd[1]: Reached target local-fs.target.
Dec 13 14:14:03.896807 systemd[1]: Reached target machines.target.
Dec 13 14:14:03.900844 systemd[1]: Starting ldconfig.service...
Dec 13 14:14:03.908557 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:14:03.908719 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:14:03.911154 systemd[1]: Starting systemd-boot-update.service...
Dec 13 14:14:03.915128 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Dec 13 14:14:03.919827 systemd[1]: Starting systemd-machine-id-commit.service...
Dec 13 14:14:03.924408 systemd[1]: Starting systemd-sysext.service...
Dec 13 14:14:03.952242 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1628 (bootctl)
Dec 13 14:14:03.955615 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Dec 13 14:14:03.960981 systemd[1]: Unmounting usr-share-oem.mount...
Dec 13 14:14:03.973956 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Dec 13 14:14:03.974594 systemd[1]: Unmounted usr-share-oem.mount.
Dec 13 14:14:04.010675 kernel: loop0: detected capacity change from 0 to 194512
Dec 13 14:14:04.012019 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Dec 13 14:14:04.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:04.112291 systemd-fsck[1641]: fsck.fat 4.2 (2021-01-31)
Dec 13 14:14:04.112291 systemd-fsck[1641]: /dev/nvme0n1p1: 236 files, 117175/258078 clusters
Dec 13 14:14:04.115157 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Dec 13 14:14:04.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:04.120913 systemd[1]: Mounting boot.mount...
Dec 13 14:14:04.151269 systemd[1]: Mounted boot.mount.
Dec 13 14:14:04.178770 systemd[1]: Finished systemd-boot-update.service.
Dec 13 14:14:04.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:04.221655 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 14:14:04.251676 kernel: loop1: detected capacity change from 0 to 194512
Dec 13 14:14:04.271463 (sd-sysext)[1660]: Using extensions 'kubernetes'.
Dec 13 14:14:04.273170 (sd-sysext)[1660]: Merged extensions into '/usr'.
Dec 13 14:14:04.326826 systemd[1]: Mounting usr-share-oem.mount...
Dec 13 14:14:04.328834 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:14:04.333036 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:14:04.338149 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:14:04.343153 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:14:04.350258 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:14:04.350686 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:14:04.363275 systemd[1]: Mounted usr-share-oem.mount.
Dec 13 14:14:04.366139 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:14:04.366797 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:14:04.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:04.367000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:04.370319 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:14:04.374941 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:14:04.375441 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:14:04.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:04.376000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:04.381325 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:14:04.381793 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:14:04.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:04.382000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:04.384453 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:14:04.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:04.386907 systemd[1]: Finished systemd-sysext.service.
Dec 13 14:14:04.392365 systemd[1]: Starting ensure-sysext.service...
Dec 13 14:14:04.402407 systemd[1]: Starting systemd-tmpfiles-setup.service...
Dec 13 14:14:04.436885 systemd[1]: Reloading.
Dec 13 14:14:04.456348 systemd-tmpfiles[1675]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Dec 13 14:14:04.461778 systemd-tmpfiles[1675]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 14:14:04.467481 systemd-tmpfiles[1675]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 14:14:04.621953 /usr/lib/systemd/system-generators/torcx-generator[1694]: time="2024-12-13T14:14:04Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 14:14:04.626954 /usr/lib/systemd/system-generators/torcx-generator[1694]: time="2024-12-13T14:14:04Z" level=info msg="torcx already run"
Dec 13 14:14:04.918233 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 14:14:04.918691 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 14:14:04.950844 systemd-networkd[1507]: eth0: Gained IPv6LL
Dec 13 14:14:04.967489 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:14:05.109357 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 14:14:05.155505 systemd[1]: Finished systemd-networkd-wait-online.service.
Dec 13 14:14:05.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:05.158750 systemd[1]: Finished systemd-machine-id-commit.service.
Dec 13 14:14:05.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:05.163309 systemd[1]: Finished systemd-tmpfiles-setup.service.
Dec 13 14:14:05.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:05.172998 systemd[1]: Starting audit-rules.service...
Dec 13 14:14:05.177955 systemd[1]: Starting clean-ca-certificates.service...
Dec 13 14:14:05.183534 systemd[1]: Starting systemd-journal-catalog-update.service...
Dec 13 14:14:05.191491 systemd[1]: Starting systemd-resolved.service...
Dec 13 14:14:05.201702 systemd[1]: Starting systemd-timesyncd.service...
Dec 13 14:14:05.213186 systemd[1]: Starting systemd-update-utmp.service...
Dec 13 14:14:05.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:05.225956 ldconfig[1627]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 14:14:05.219376 systemd[1]: Finished clean-ca-certificates.service.
Dec 13 14:14:05.235137 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:14:05.242120 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:14:05.247929 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:14:05.252745 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:14:05.257095 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:14:05.257475 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:14:05.257850 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:14:05.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:05.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:05.275000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:05.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:05.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:05.271161 systemd[1]: Finished ldconfig.service.
Dec 13 14:14:05.274575 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:14:05.275104 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:14:05.278440 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:14:05.278874 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:14:05.281928 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:14:05.285233 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:14:05.292320 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:14:05.290000 audit[1772]: SYSTEM_BOOT pid=1772 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:05.301605 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:14:05.308805 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:14:05.309235 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:14:05.309580 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:14:05.313541 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:14:05.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:05.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:05.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:05.320000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:05.316028 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:14:05.319027 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:14:05.319424 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:14:05.327751 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:14:05.336422 systemd[1]: Finished systemd-update-utmp.service.
Dec 13 14:14:05.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:05.342544 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:14:05.345298 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:14:05.351287 systemd[1]: Starting modprobe@drm.service...
Dec 13 14:14:05.359980 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:14:05.362723 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:14:05.363114 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:14:05.363471 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:14:05.369199 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:14:05.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:05.371000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:05.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:05.375000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:05.370259 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:14:05.373999 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:14:05.374427 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:14:05.377972 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:14:05.380541 systemd[1]: Finished ensure-sysext.service.
Dec 13 14:14:05.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:05.394981 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:14:05.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:05.396000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:05.395407 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:14:05.398088 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:14:05.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:05.401000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:05.400028 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 14:14:05.400465 systemd[1]: Finished modprobe@drm.service.
Dec 13 14:14:05.449405 systemd[1]: Finished systemd-journal-catalog-update.service.
Dec 13 14:14:05.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:14:05.469338 systemd[1]: Starting systemd-update-done.service...
Dec 13 14:14:05.476000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Dec 13 14:14:05.476000 audit[1801]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe2a4c830 a2=420 a3=0 items=0 ppid=1760 pid=1801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:14:05.476000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 13 14:14:05.478360 augenrules[1801]: No rules
Dec 13 14:14:05.479781 systemd[1]: Finished audit-rules.service.
Dec 13 14:14:05.497687 systemd[1]: Finished systemd-update-done.service.
Dec 13 14:14:05.562391 systemd-resolved[1763]: Positive Trust Anchors:
Dec 13 14:14:05.563108 systemd-resolved[1763]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 14:14:05.563312 systemd-resolved[1763]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 14:14:05.590052 systemd[1]: Started systemd-timesyncd.service.
Dec 13 14:14:05.592096 systemd[1]: Reached target time-set.target.
Dec 13 14:14:05.609790 systemd-resolved[1763]: Defaulting to hostname 'linux'.
Dec 13 14:14:05.613369 systemd[1]: Started systemd-resolved.service.
Dec 13 14:14:05.615201 systemd[1]: Reached target network.target.
Dec 13 14:14:05.616842 systemd[1]: Reached target network-online.target.
Dec 13 14:14:05.618698 systemd[1]: Reached target nss-lookup.target.
Dec 13 14:14:05.620898 systemd[1]: Reached target sysinit.target.
Dec 13 14:14:05.622875 systemd[1]: Started motdgen.path.
Dec 13 14:14:05.624519 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Dec 13 14:14:05.627194 systemd[1]: Started logrotate.timer.
Dec 13 14:14:05.629079 systemd[1]: Started mdadm.timer.
Dec 13 14:14:05.630657 systemd[1]: Started systemd-tmpfiles-clean.timer.
Dec 13 14:14:05.632539 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 14:14:05.632606 systemd[1]: Reached target paths.target.
Dec 13 14:14:05.634423 systemd[1]: Reached target timers.target.
Dec 13 14:14:05.636757 systemd[1]: Listening on dbus.socket.
Dec 13 14:14:05.641400 systemd[1]: Starting docker.socket...
Dec 13 14:14:05.647224 systemd[1]: Listening on sshd.socket.
Dec 13 14:14:05.649219 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:14:05.649917 systemd[1]: Listening on docker.socket.
Dec 13 14:14:05.658200 systemd[1]: Reached target sockets.target.
Dec 13 14:14:05.659863 systemd[1]: Reached target basic.target.
Dec 13 14:14:05.661825 systemd[1]: System is tainted: cgroupsv1
Dec 13 14:14:05.661918 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 14:14:05.661975 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 14:14:05.664574 systemd[1]: Started amazon-ssm-agent.service.
Dec 13 14:14:05.669775 systemd[1]: Starting containerd.service...
Dec 13 14:14:05.674053 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Dec 13 14:14:05.676252 systemd-timesyncd[1765]: Contacted time server 148.113.194.34:123 (0.flatcar.pool.ntp.org).
Dec 13 14:14:05.676367 systemd-timesyncd[1765]: Initial clock synchronization to Fri 2024-12-13 14:14:06.033268 UTC.
Dec 13 14:14:05.681858 systemd[1]: Starting dbus.service...
Dec 13 14:14:05.692750 systemd[1]: Starting enable-oem-cloudinit.service...
Dec 13 14:14:05.697843 systemd[1]: Starting extend-filesystems.service...
Dec 13 14:14:05.707334 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Dec 13 14:14:05.717507 jq[1818]: false
Dec 13 14:14:05.714296 systemd[1]: Starting kubelet.service...
Dec 13 14:14:05.721162 systemd[1]: Starting motdgen.service...
Dec 13 14:14:05.727838 systemd[1]: Started nvidia.service.
Dec 13 14:14:05.737117 systemd[1]: Starting prepare-helm.service...
Dec 13 14:14:05.741954 systemd[1]: Starting ssh-key-proc-cmdline.service...
Dec 13 14:14:05.747896 systemd[1]: Starting sshd-keygen.service...
Dec 13 14:14:05.763524 systemd[1]: Starting systemd-logind.service...
Dec 13 14:14:05.765188 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:14:05.765380 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 14:14:05.770600 systemd[1]: Starting update-engine.service...
Dec 13 14:14:05.789136 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Dec 13 14:14:05.831652 jq[1832]: true
Dec 13 14:14:05.800009 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 14:14:05.800695 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Dec 13 14:14:05.902848 jq[1841]: true
Dec 13 14:14:05.936315 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 14:14:05.936948 systemd[1]: Finished ssh-key-proc-cmdline.service.
Dec 13 14:14:05.968960 tar[1835]: linux-arm64/helm
Dec 13 14:14:05.993539 dbus-daemon[1817]: [system] SELinux support is enabled
Dec 13 14:14:05.998102 systemd[1]: Started dbus.service.
Dec 13 14:14:06.003334 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 14:14:06.003418 systemd[1]: Reached target system-config.target.
Dec 13 14:14:06.005464 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 14:14:06.005523 systemd[1]: Reached target user-config.target.
Dec 13 14:14:06.049853 dbus-daemon[1817]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1507 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Dec 13 14:14:06.059536 systemd[1]: Starting systemd-hostnamed.service...
Dec 13 14:14:06.119590 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 14:14:06.120302 systemd[1]: Finished motdgen.service.
Dec 13 14:14:06.133651 extend-filesystems[1819]: Found loop1
Dec 13 14:14:06.133651 extend-filesystems[1819]: Found nvme0n1
Dec 13 14:14:06.133651 extend-filesystems[1819]: Found nvme0n1p1
Dec 13 14:14:06.133651 extend-filesystems[1819]: Found nvme0n1p2
Dec 13 14:14:06.133651 extend-filesystems[1819]: Found nvme0n1p3
Dec 13 14:14:06.133651 extend-filesystems[1819]: Found usr
Dec 13 14:14:06.133651 extend-filesystems[1819]: Found nvme0n1p4
Dec 13 14:14:06.133651 extend-filesystems[1819]: Found nvme0n1p6
Dec 13 14:14:06.133651 extend-filesystems[1819]: Found nvme0n1p7
Dec 13 14:14:06.133651 extend-filesystems[1819]: Found nvme0n1p9
Dec 13 14:14:06.133651 extend-filesystems[1819]: Checking size of /dev/nvme0n1p9
Dec 13 14:14:06.196248 amazon-ssm-agent[1813]: 2024/12/13 14:14:06 Failed to load instance info from vault. RegistrationKey does not exist.
Dec 13 14:14:06.206051 amazon-ssm-agent[1813]: Initializing new seelog logger
Dec 13 14:14:06.220015 amazon-ssm-agent[1813]: New Seelog Logger Creation Complete
Dec 13 14:14:06.220306 amazon-ssm-agent[1813]: 2024/12/13 14:14:06 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 13 14:14:06.220426 amazon-ssm-agent[1813]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 13 14:14:06.221166 amazon-ssm-agent[1813]: 2024/12/13 14:14:06 processing appconfig overrides
Dec 13 14:14:06.230480 bash[1887]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 14:14:06.232032 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Dec 13 14:14:06.235384 extend-filesystems[1819]: Resized partition /dev/nvme0n1p9
Dec 13 14:14:06.250539 extend-filesystems[1894]: resize2fs 1.46.5 (30-Dec-2021)
Dec 13 14:14:06.277695 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Dec 13 14:14:06.280904 update_engine[1830]: I1213 14:14:06.280427 1830 main.cc:92] Flatcar Update Engine starting
Dec 13 14:14:06.287076 systemd[1]: Started update-engine.service.
Dec 13 14:14:06.292205 systemd[1]: Started locksmithd.service.
Dec 13 14:14:06.295122 update_engine[1830]: I1213 14:14:06.295079 1830 update_check_scheduler.cc:74] Next update check in 10m16s
Dec 13 14:14:06.313550 env[1837]: time="2024-12-13T14:14:06.309707421Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Dec 13 14:14:06.345480 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Dec 13 14:14:06.378914 extend-filesystems[1894]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Dec 13 14:14:06.378914 extend-filesystems[1894]: old_desc_blocks = 1, new_desc_blocks = 1
Dec 13 14:14:06.378914 extend-filesystems[1894]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Dec 13 14:14:06.393236 extend-filesystems[1819]: Resized filesystem in /dev/nvme0n1p9
Dec 13 14:14:06.383517 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 14:14:06.384246 systemd[1]: Finished extend-filesystems.service.
Dec 13 14:14:06.469276 systemd[1]: nvidia.service: Deactivated successfully.
Dec 13 14:14:06.650412 env[1837]: time="2024-12-13T14:14:06.650338745Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 14:14:06.673045 env[1837]: time="2024-12-13T14:14:06.672966422Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:14:06.693364 env[1837]: time="2024-12-13T14:14:06.693260986Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:14:06.693618 env[1837]: time="2024-12-13T14:14:06.693577064Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:14:06.694561 env[1837]: time="2024-12-13T14:14:06.694473224Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:14:06.694868 env[1837]: time="2024-12-13T14:14:06.694807228Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 14:14:06.695101 env[1837]: time="2024-12-13T14:14:06.695043001Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Dec 13 14:14:06.695255 env[1837]: time="2024-12-13T14:14:06.695220332Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 14:14:06.695782 env[1837]: time="2024-12-13T14:14:06.695711648Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:14:06.696830 env[1837]: time="2024-12-13T14:14:06.696763214Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:14:06.697665 env[1837]: time="2024-12-13T14:14:06.697553835Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:14:06.697883 env[1837]: time="2024-12-13T14:14:06.697845079Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 14:14:06.698258 env[1837]: time="2024-12-13T14:14:06.698195693Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Dec 13 14:14:06.698483 env[1837]: time="2024-12-13T14:14:06.698442122Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 14:14:06.708055 systemd-logind[1829]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 13 14:14:06.708114 systemd-logind[1829]: Watching system buttons on /dev/input/event1 (Sleep Button)
Dec 13 14:14:06.708892 env[1837]: time="2024-12-13T14:14:06.708833802Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 14:14:06.709111 env[1837]: time="2024-12-13T14:14:06.709074364Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 14:14:06.709293 env[1837]: time="2024-12-13T14:14:06.709254478Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 14:14:06.709490 env[1837]: time="2024-12-13T14:14:06.709454637Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 14:14:06.709699 env[1837]: time="2024-12-13T14:14:06.709659535Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 14:14:06.709864 env[1837]: time="2024-12-13T14:14:06.709824105Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 14:14:06.710155 env[1837]: time="2024-12-13T14:14:06.710112479Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 14:14:06.710911 env[1837]: time="2024-12-13T14:14:06.710854774Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 14:14:06.714418 env[1837]: time="2024-12-13T14:14:06.714312011Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Dec 13 14:14:06.714696 env[1837]: time="2024-12-13T14:14:06.714653975Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 14:14:06.714878 env[1837]: time="2024-12-13T14:14:06.714843880Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 14:14:06.715046 env[1837]: time="2024-12-13T14:14:06.715004501Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 14:14:06.715540 env[1837]: time="2024-12-13T14:14:06.715478342Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 14:14:06.720232 env[1837]: time="2024-12-13T14:14:06.720100418Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 14:14:06.721455 env[1837]: time="2024-12-13T14:14:06.721396269Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 14:14:06.723266 systemd-logind[1829]: New seat seat0.
Dec 13 14:14:06.723640 env[1837]: time="2024-12-13T14:14:06.723572384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 14:14:06.723827 env[1837]: time="2024-12-13T14:14:06.723788376Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 14:14:06.724152 env[1837]: time="2024-12-13T14:14:06.724109644Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 14:14:06.724328 env[1837]: time="2024-12-13T14:14:06.724293557Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 14:14:06.724508 env[1837]: time="2024-12-13T14:14:06.724473032Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 14:14:06.724761 env[1837]: time="2024-12-13T14:14:06.724708818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 14:14:06.727178 systemd[1]: Started systemd-logind.service.
Dec 13 14:14:06.730167 env[1837]: time="2024-12-13T14:14:06.728331615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 14:14:06.732848 env[1837]: time="2024-12-13T14:14:06.732770794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 14:14:06.733103 env[1837]: time="2024-12-13T14:14:06.733049302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 14:14:06.733284 env[1837]: time="2024-12-13T14:14:06.733247819Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 14:14:06.733528 env[1837]: time="2024-12-13T14:14:06.733462394Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 14:14:06.734353 env[1837]: time="2024-12-13T14:14:06.734276168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 14:14:06.734746 env[1837]: time="2024-12-13T14:14:06.734687769Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 14:14:06.739843 env[1837]: time="2024-12-13T14:14:06.739752986Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 14:14:06.740107 env[1837]: time="2024-12-13T14:14:06.740056352Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 14:14:06.740263 env[1837]: time="2024-12-13T14:14:06.740225849Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Dec 13 14:14:06.740405 env[1837]: time="2024-12-13T14:14:06.740375476Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 14:14:06.740555 env[1837]: time="2024-12-13T14:14:06.740523599Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Dec 13 14:14:06.740829 env[1837]: time="2024-12-13T14:14:06.740766794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 14:14:06.741740 env[1837]: time="2024-12-13T14:14:06.741508913Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 14:14:06.743191 env[1837]: time="2024-12-13T14:14:06.743115615Z" level=info msg="Connect containerd service"
Dec 13 14:14:06.746268 env[1837]: time="2024-12-13T14:14:06.746194947Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 14:14:06.748287 env[1837]: time="2024-12-13T14:14:06.748203860Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 14:14:06.753929 env[1837]: time="2024-12-13T14:14:06.753851039Z" level=info msg="Start subscribing containerd event"
Dec 13 14:14:06.754175 env[1837]: time="2024-12-13T14:14:06.754123304Z" level=info msg="Start recovering state"
Dec 13 14:14:06.754459 env[1837]: time="2024-12-13T14:14:06.754426921Z" level=info msg="Start event monitor"
Dec 13 14:14:06.754619 env[1837]: time="2024-12-13T14:14:06.754586552Z" level=info msg="Start snapshots syncer"
Dec 13 14:14:06.754797 env[1837]: time="2024-12-13T14:14:06.754762655Z" level=info msg="Start cni network conf syncer for default"
Dec 13 14:14:06.755910 env[1837]: time="2024-12-13T14:14:06.755843204Z" level=info msg="Start streaming server"
Dec 13 14:14:06.757073 env[1837]: time="2024-12-13T14:14:06.756998554Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 14:14:06.757802 env[1837]: time="2024-12-13T14:14:06.757721444Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 14:14:06.799464 systemd[1]: Started containerd.service.
Dec 13 14:14:06.802392 env[1837]: time="2024-12-13T14:14:06.802343352Z" level=info msg="containerd successfully booted in 0.531876s"
Dec 13 14:14:06.807924 dbus-daemon[1817]: [system] Successfully activated service 'org.freedesktop.hostname1'
Dec 13 14:14:06.808176 systemd[1]: Started systemd-hostnamed.service.
Dec 13 14:14:06.811650 dbus-daemon[1817]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1876 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Dec 13 14:14:06.817164 systemd[1]: Starting polkit.service...
Dec 13 14:14:06.863841 polkitd[1939]: Started polkitd version 121
Dec 13 14:14:06.913260 polkitd[1939]: Loading rules from directory /etc/polkit-1/rules.d
Dec 13 14:14:06.913404 polkitd[1939]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 13 14:14:06.924443 polkitd[1939]: Finished loading, compiling and executing 2 rules
Dec 13 14:14:06.925335 dbus-daemon[1817]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Dec 13 14:14:06.925606 systemd[1]: Started polkit.service.
Dec 13 14:14:06.932286 polkitd[1939]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Dec 13 14:14:06.994893 systemd-hostnamed[1876]: Hostname set to (transient)
Dec 13 14:14:06.995189 systemd-resolved[1763]: System hostname changed to 'ip-172-31-26-163'.
Dec 13 14:14:07.130557 amazon-ssm-agent[1813]: 2024-12-13 14:14:07 INFO Create new startup processor Dec 13 14:14:07.134624 coreos-metadata[1815]: Dec 13 14:14:07.134 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 13 14:14:07.139298 amazon-ssm-agent[1813]: 2024-12-13 14:14:07 INFO [LongRunningPluginsManager] registered plugins: {} Dec 13 14:14:07.139613 amazon-ssm-agent[1813]: 2024-12-13 14:14:07 INFO Initializing bookkeeping folders Dec 13 14:14:07.139806 amazon-ssm-agent[1813]: 2024-12-13 14:14:07 INFO removing the completed state files Dec 13 14:14:07.139952 amazon-ssm-agent[1813]: 2024-12-13 14:14:07 INFO Initializing bookkeeping folders for long running plugins Dec 13 14:14:07.140070 amazon-ssm-agent[1813]: 2024-12-13 14:14:07 INFO Initializing replies folder for MDS reply requests that couldn't reach the service Dec 13 14:14:07.140185 amazon-ssm-agent[1813]: 2024-12-13 14:14:07 INFO Initializing healthcheck folders for long running plugins Dec 13 14:14:07.140300 amazon-ssm-agent[1813]: 2024-12-13 14:14:07 INFO Initializing locations for inventory plugin Dec 13 14:14:07.140444 amazon-ssm-agent[1813]: 2024-12-13 14:14:07 INFO Initializing default location for custom inventory Dec 13 14:14:07.140561 amazon-ssm-agent[1813]: 2024-12-13 14:14:07 INFO Initializing default location for file inventory Dec 13 14:14:07.140726 amazon-ssm-agent[1813]: 2024-12-13 14:14:07 INFO Initializing default location for role inventory Dec 13 14:14:07.140865 amazon-ssm-agent[1813]: 2024-12-13 14:14:07 INFO Init the cloudwatchlogs publisher Dec 13 14:14:07.140988 amazon-ssm-agent[1813]: 2024-12-13 14:14:07 INFO [instanceID=i-07df37b816d3fefc9] Successfully loaded platform independent plugin aws:updateSsmAgent Dec 13 14:14:07.141112 amazon-ssm-agent[1813]: 2024-12-13 14:14:07 INFO [instanceID=i-07df37b816d3fefc9] Successfully loaded platform independent plugin aws:configureDocker Dec 13 14:14:07.141232 amazon-ssm-agent[1813]: 2024-12-13 14:14:07 INFO 
[instanceID=i-07df37b816d3fefc9] Successfully loaded platform independent plugin aws:refreshAssociation Dec 13 14:14:07.141356 amazon-ssm-agent[1813]: 2024-12-13 14:14:07 INFO [instanceID=i-07df37b816d3fefc9] Successfully loaded platform independent plugin aws:configurePackage Dec 13 14:14:07.141476 amazon-ssm-agent[1813]: 2024-12-13 14:14:07 INFO [instanceID=i-07df37b816d3fefc9] Successfully loaded platform independent plugin aws:runDocument Dec 13 14:14:07.141599 amazon-ssm-agent[1813]: 2024-12-13 14:14:07 INFO [instanceID=i-07df37b816d3fefc9] Successfully loaded platform independent plugin aws:softwareInventory Dec 13 14:14:07.141796 amazon-ssm-agent[1813]: 2024-12-13 14:14:07 INFO [instanceID=i-07df37b816d3fefc9] Successfully loaded platform independent plugin aws:runPowerShellScript Dec 13 14:14:07.141866 coreos-metadata[1815]: Dec 13 14:14:07.141 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Dec 13 14:14:07.141993 amazon-ssm-agent[1813]: 2024-12-13 14:14:07 INFO [instanceID=i-07df37b816d3fefc9] Successfully loaded platform independent plugin aws:runDockerAction Dec 13 14:14:07.142115 amazon-ssm-agent[1813]: 2024-12-13 14:14:07 INFO [instanceID=i-07df37b816d3fefc9] Successfully loaded platform independent plugin aws:downloadContent Dec 13 14:14:07.142254 amazon-ssm-agent[1813]: 2024-12-13 14:14:07 INFO [instanceID=i-07df37b816d3fefc9] Successfully loaded platform dependent plugin aws:runShellScript Dec 13 14:14:07.142374 amazon-ssm-agent[1813]: 2024-12-13 14:14:07 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 Dec 13 14:14:07.142504 amazon-ssm-agent[1813]: 2024-12-13 14:14:07 INFO OS: linux, Arch: arm64 Dec 13 14:14:07.143843 coreos-metadata[1815]: Dec 13 14:14:07.143 INFO Fetch successful Dec 13 14:14:07.143843 coreos-metadata[1815]: Dec 13 14:14:07.143 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 13 14:14:07.146464 coreos-metadata[1815]: Dec 13 14:14:07.146 INFO 
Fetch successful Dec 13 14:14:07.148557 amazon-ssm-agent[1813]: datastore file /var/lib/amazon/ssm/i-07df37b816d3fefc9/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute Dec 13 14:14:07.151828 unknown[1815]: wrote ssh authorized keys file for user: core Dec 13 14:14:07.161840 amazon-ssm-agent[1813]: 2024-12-13 14:14:07 INFO [MessagingDeliveryService] Starting document processing engine... Dec 13 14:14:07.182867 update-ssh-keys[1989]: Updated "/home/core/.ssh/authorized_keys" Dec 13 14:14:07.183703 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Dec 13 14:14:07.266376 amazon-ssm-agent[1813]: 2024-12-13 14:14:07 INFO [MessagingDeliveryService] [EngineProcessor] Starting Dec 13 14:14:07.361327 amazon-ssm-agent[1813]: 2024-12-13 14:14:07 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing Dec 13 14:14:07.455847 amazon-ssm-agent[1813]: 2024-12-13 14:14:07 INFO [MessagingDeliveryService] Starting message polling Dec 13 14:14:07.550696 amazon-ssm-agent[1813]: 2024-12-13 14:14:07 INFO [MessagingDeliveryService] Starting send replies to MDS Dec 13 14:14:07.645597 amazon-ssm-agent[1813]: 2024-12-13 14:14:07 INFO [instanceID=i-07df37b816d3fefc9] Starting association polling Dec 13 14:14:07.740760 amazon-ssm-agent[1813]: 2024-12-13 14:14:07 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting Dec 13 14:14:07.836191 amazon-ssm-agent[1813]: 2024-12-13 14:14:07 INFO [MessagingDeliveryService] [Association] Launching response handler Dec 13 14:14:07.931727 amazon-ssm-agent[1813]: 2024-12-13 14:14:07 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing Dec 13 14:14:08.027473 amazon-ssm-agent[1813]: 2024-12-13 14:14:07 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service Dec 13 14:14:08.044014 tar[1835]: linux-arm64/LICENSE Dec 13 14:14:08.044720 tar[1835]: linux-arm64/README.md Dec 13 14:14:08.060098 systemd[1]: Finished 
prepare-helm.service. Dec 13 14:14:08.124445 amazon-ssm-agent[1813]: 2024-12-13 14:14:07 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized Dec 13 14:14:08.220520 amazon-ssm-agent[1813]: 2024-12-13 14:14:07 INFO [MessageGatewayService] Starting session document processing engine... Dec 13 14:14:08.304366 locksmithd[1897]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 14:14:08.316815 amazon-ssm-agent[1813]: 2024-12-13 14:14:07 INFO [MessageGatewayService] [EngineProcessor] Starting Dec 13 14:14:08.388223 systemd[1]: Started kubelet.service. Dec 13 14:14:08.413319 amazon-ssm-agent[1813]: 2024-12-13 14:14:07 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module. Dec 13 14:14:08.510012 amazon-ssm-agent[1813]: 2024-12-13 14:14:07 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-07df37b816d3fefc9, requestId: 92add61b-f284-40a9-9314-5765d1456bfd Dec 13 14:14:08.606934 amazon-ssm-agent[1813]: 2024-12-13 14:14:07 INFO [OfflineService] Starting document processing engine... 
Dec 13 14:14:08.704068 amazon-ssm-agent[1813]: 2024-12-13 14:14:07 INFO [OfflineService] [EngineProcessor] Starting Dec 13 14:14:08.801308 amazon-ssm-agent[1813]: 2024-12-13 14:14:07 INFO [OfflineService] [EngineProcessor] Initial processing Dec 13 14:14:08.898824 amazon-ssm-agent[1813]: 2024-12-13 14:14:07 INFO [OfflineService] Starting message polling Dec 13 14:14:08.996602 amazon-ssm-agent[1813]: 2024-12-13 14:14:07 INFO [OfflineService] Starting send replies to MDS Dec 13 14:14:09.094420 amazon-ssm-agent[1813]: 2024-12-13 14:14:07 INFO [LongRunningPluginsManager] starting long running plugin manager Dec 13 14:14:09.192513 amazon-ssm-agent[1813]: 2024-12-13 14:14:07 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute Dec 13 14:14:09.242267 kubelet[2053]: E1213 14:14:09.242140 2053 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:14:09.246136 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:14:09.246524 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:14:09.291161 amazon-ssm-agent[1813]: 2024-12-13 14:14:07 INFO [HealthCheck] HealthCheck reporting agent health. Dec 13 14:14:09.389607 amazon-ssm-agent[1813]: 2024-12-13 14:14:07 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck Dec 13 14:14:09.488268 amazon-ssm-agent[1813]: 2024-12-13 14:14:07 INFO [MessageGatewayService] listening reply. 
Dec 13 14:14:09.590033 amazon-ssm-agent[1813]: 2024-12-13 14:14:07 INFO [StartupProcessor] Executing startup processor tasks Dec 13 14:14:09.689021 amazon-ssm-agent[1813]: 2024-12-13 14:14:07 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running Dec 13 14:14:09.791122 amazon-ssm-agent[1813]: 2024-12-13 14:14:07 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk Dec 13 14:14:09.890667 amazon-ssm-agent[1813]: 2024-12-13 14:14:07 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.6 Dec 13 14:14:09.991604 amazon-ssm-agent[1813]: 2024-12-13 14:14:07 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-07df37b816d3fefc9?role=subscribe&stream=input Dec 13 14:14:10.091442 amazon-ssm-agent[1813]: 2024-12-13 14:14:07 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-07df37b816d3fefc9?role=subscribe&stream=input Dec 13 14:14:10.192997 amazon-ssm-agent[1813]: 2024-12-13 14:14:07 INFO [MessageGatewayService] Starting receiving message from control channel Dec 13 14:14:10.293125 amazon-ssm-agent[1813]: 2024-12-13 14:14:07 INFO [MessageGatewayService] [EngineProcessor] Initial processing Dec 13 14:14:11.323657 sshd_keygen[1849]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 14:14:11.361024 systemd[1]: Finished sshd-keygen.service. Dec 13 14:14:11.368313 systemd[1]: Starting issuegen.service... Dec 13 14:14:11.380050 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 14:14:11.380674 systemd[1]: Finished issuegen.service. Dec 13 14:14:11.387801 systemd[1]: Starting systemd-user-sessions.service... Dec 13 14:14:11.405370 systemd[1]: Finished systemd-user-sessions.service. Dec 13 14:14:11.412360 systemd[1]: Started getty@tty1.service. Dec 13 14:14:11.418025 systemd[1]: Started serial-getty@ttyS0.service. 
Dec 13 14:14:11.420210 systemd[1]: Reached target getty.target. Dec 13 14:14:11.421956 systemd[1]: Reached target multi-user.target. Dec 13 14:14:11.426775 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 14:14:11.443833 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 14:14:11.444567 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 14:14:11.452556 systemd[1]: Startup finished in 10.008s (kernel) + 15.020s (userspace) = 25.028s. Dec 13 14:14:14.125021 systemd[1]: Created slice system-sshd.slice. Dec 13 14:14:14.127356 systemd[1]: Started sshd@0-172.31.26.163:22-139.178.89.65:39058.service. Dec 13 14:14:14.329371 sshd[2079]: Accepted publickey for core from 139.178.89.65 port 39058 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:14:14.333977 sshd[2079]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:14:14.353393 systemd[1]: Created slice user-500.slice. Dec 13 14:14:14.355494 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 14:14:14.360797 systemd-logind[1829]: New session 1 of user core. Dec 13 14:14:14.378000 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 14:14:14.383816 systemd[1]: Starting user@500.service... Dec 13 14:14:14.390501 (systemd)[2084]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:14:14.569307 systemd[2084]: Queued start job for default target default.target. Dec 13 14:14:14.569749 systemd[2084]: Reached target paths.target. Dec 13 14:14:14.569787 systemd[2084]: Reached target sockets.target. Dec 13 14:14:14.569819 systemd[2084]: Reached target timers.target. Dec 13 14:14:14.569849 systemd[2084]: Reached target basic.target. Dec 13 14:14:14.569957 systemd[2084]: Reached target default.target. Dec 13 14:14:14.570021 systemd[2084]: Startup finished in 167ms. Dec 13 14:14:14.571432 systemd[1]: Started user@500.service. 
Dec 13 14:14:14.573359 systemd[1]: Started session-1.scope. Dec 13 14:14:14.724430 systemd[1]: Started sshd@1-172.31.26.163:22-139.178.89.65:39074.service. Dec 13 14:14:14.894953 sshd[2093]: Accepted publickey for core from 139.178.89.65 port 39074 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:14:14.898078 sshd[2093]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:14:14.907336 systemd[1]: Started session-2.scope. Dec 13 14:14:14.907803 systemd-logind[1829]: New session 2 of user core. Dec 13 14:14:15.038983 sshd[2093]: pam_unix(sshd:session): session closed for user core Dec 13 14:14:15.044400 systemd[1]: sshd@1-172.31.26.163:22-139.178.89.65:39074.service: Deactivated successfully. Dec 13 14:14:15.046746 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 14:14:15.047999 systemd-logind[1829]: Session 2 logged out. Waiting for processes to exit. Dec 13 14:14:15.050161 systemd-logind[1829]: Removed session 2. Dec 13 14:14:15.064363 systemd[1]: Started sshd@2-172.31.26.163:22-139.178.89.65:39086.service. Dec 13 14:14:15.233263 sshd[2100]: Accepted publickey for core from 139.178.89.65 port 39086 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:14:15.236314 sshd[2100]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:14:15.244010 systemd-logind[1829]: New session 3 of user core. Dec 13 14:14:15.245010 systemd[1]: Started session-3.scope. Dec 13 14:14:15.368762 sshd[2100]: pam_unix(sshd:session): session closed for user core Dec 13 14:14:15.373965 systemd[1]: sshd@2-172.31.26.163:22-139.178.89.65:39086.service: Deactivated successfully. Dec 13 14:14:15.375333 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 14:14:15.377873 systemd-logind[1829]: Session 3 logged out. Waiting for processes to exit. Dec 13 14:14:15.379843 systemd-logind[1829]: Removed session 3. 
Dec 13 14:14:15.394478 systemd[1]: Started sshd@3-172.31.26.163:22-139.178.89.65:39098.service. Dec 13 14:14:15.566744 sshd[2107]: Accepted publickey for core from 139.178.89.65 port 39098 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:14:15.569722 sshd[2107]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:14:15.578245 systemd[1]: Started session-4.scope. Dec 13 14:14:15.579159 systemd-logind[1829]: New session 4 of user core. Dec 13 14:14:15.713676 sshd[2107]: pam_unix(sshd:session): session closed for user core Dec 13 14:14:15.719155 systemd-logind[1829]: Session 4 logged out. Waiting for processes to exit. Dec 13 14:14:15.719477 systemd[1]: sshd@3-172.31.26.163:22-139.178.89.65:39098.service: Deactivated successfully. Dec 13 14:14:15.721023 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 14:14:15.721956 systemd-logind[1829]: Removed session 4. Dec 13 14:14:15.739256 systemd[1]: Started sshd@4-172.31.26.163:22-139.178.89.65:39110.service. Dec 13 14:14:15.910453 sshd[2114]: Accepted publickey for core from 139.178.89.65 port 39110 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:14:15.912966 sshd[2114]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:14:15.921214 systemd-logind[1829]: New session 5 of user core. Dec 13 14:14:15.922077 systemd[1]: Started session-5.scope. 
Dec 13 14:14:16.050883 sudo[2118]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 14:14:16.052064 sudo[2118]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 14:14:16.088310 dbus-daemon[1817]: avc: received setenforce notice (enforcing=1) Dec 13 14:14:16.088839 sudo[2118]: pam_unix(sudo:session): session closed for user root Dec 13 14:14:16.114103 sshd[2114]: pam_unix(sshd:session): session closed for user core Dec 13 14:14:16.120279 systemd[1]: sshd@4-172.31.26.163:22-139.178.89.65:39110.service: Deactivated successfully. Dec 13 14:14:16.121743 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 14:14:16.122807 systemd-logind[1829]: Session 5 logged out. Waiting for processes to exit. Dec 13 14:14:16.125018 systemd-logind[1829]: Removed session 5. Dec 13 14:14:16.139255 systemd[1]: Started sshd@5-172.31.26.163:22-139.178.89.65:39112.service. Dec 13 14:14:16.311667 sshd[2122]: Accepted publickey for core from 139.178.89.65 port 39112 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:14:16.314810 sshd[2122]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:14:16.323306 systemd[1]: Started session-6.scope. Dec 13 14:14:16.324157 systemd-logind[1829]: New session 6 of user core. Dec 13 14:14:16.434027 sudo[2127]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 14:14:16.434578 sudo[2127]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 14:14:16.439937 sudo[2127]: pam_unix(sudo:session): session closed for user root Dec 13 14:14:16.449323 sudo[2126]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 14:14:16.450404 sudo[2126]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 14:14:16.468258 systemd[1]: Stopping audit-rules.service... 
Dec 13 14:14:16.469000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 13 14:14:16.472438 kernel: kauditd_printk_skb: 64 callbacks suppressed Dec 13 14:14:16.472497 kernel: audit: type=1305 audit(1734099256.469:155): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 13 14:14:16.473280 auditctl[2130]: No rules Dec 13 14:14:16.474319 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 14:14:16.474853 systemd[1]: Stopped audit-rules.service. Dec 13 14:14:16.469000 audit[2130]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe851d2e0 a2=420 a3=0 items=0 ppid=1 pid=2130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:14:16.480353 systemd[1]: Starting audit-rules.service... Dec 13 14:14:16.487773 kernel: audit: type=1300 audit(1734099256.469:155): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe851d2e0 a2=420 a3=0 items=0 ppid=1 pid=2130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:14:16.469000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Dec 13 14:14:16.493457 kernel: audit: type=1327 audit(1734099256.469:155): proctitle=2F7362696E2F617564697463746C002D44 Dec 13 14:14:16.474000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:14:16.501465 kernel: audit: type=1131 audit(1734099256.474:156): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:16.522381 augenrules[2148]: No rules Dec 13 14:14:16.524237 systemd[1]: Finished audit-rules.service. Dec 13 14:14:16.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:16.533840 sudo[2126]: pam_unix(sudo:session): session closed for user root Dec 13 14:14:16.533000 audit[2126]: USER_END pid=2126 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 14:14:16.542801 kernel: audit: type=1130 audit(1734099256.522:157): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:16.542936 kernel: audit: type=1106 audit(1734099256.533:158): pid=2126 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 14:14:16.533000 audit[2126]: CRED_DISP pid=2126 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 13 14:14:16.550558 kernel: audit: type=1104 audit(1734099256.533:159): pid=2126 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 14:14:16.566915 sshd[2122]: pam_unix(sshd:session): session closed for user core Dec 13 14:14:16.568000 audit[2122]: USER_END pid=2122 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:14:16.572425 systemd[1]: sshd@5-172.31.26.163:22-139.178.89.65:39112.service: Deactivated successfully. Dec 13 14:14:16.573772 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 14:14:16.581983 systemd-logind[1829]: Session 6 logged out. Waiting for processes to exit. Dec 13 14:14:16.568000 audit[2122]: CRED_DISP pid=2122 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:14:16.583731 kernel: audit: type=1106 audit(1734099256.568:160): pid=2122 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:14:16.590976 systemd[1]: Started sshd@6-172.31.26.163:22-139.178.89.65:39118.service. 
Dec 13 14:14:16.603405 kernel: audit: type=1104 audit(1734099256.568:161): pid=2122 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:14:16.603535 kernel: audit: type=1131 audit(1734099256.569:162): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.26.163:22-139.178.89.65:39112 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:16.569000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.26.163:22-139.178.89.65:39112 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:16.595751 systemd-logind[1829]: Removed session 6. Dec 13 14:14:16.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.26.163:22-139.178.89.65:39118 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:14:16.762000 audit[2155]: USER_ACCT pid=2155 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:14:16.764184 sshd[2155]: Accepted publickey for core from 139.178.89.65 port 39118 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:14:16.764000 audit[2155]: CRED_ACQ pid=2155 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:14:16.764000 audit[2155]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffc3f4610 a2=3 a3=1 items=0 ppid=1 pid=2155 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:14:16.764000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:14:16.767301 sshd[2155]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:14:16.775423 systemd-logind[1829]: New session 7 of user core. Dec 13 14:14:16.776362 systemd[1]: Started session-7.scope. 
Dec 13 14:14:16.785000 audit[2155]: USER_START pid=2155 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:14:16.789000 audit[2158]: CRED_ACQ pid=2158 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:14:16.888000 audit[2159]: USER_ACCT pid=2159 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 14:14:16.888947 sudo[2159]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 14:14:16.888000 audit[2159]: CRED_REFR pid=2159 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 14:14:16.889591 sudo[2159]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 14:14:16.892000 audit[2159]: USER_START pid=2159 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 14:14:16.942497 systemd[1]: Starting docker.service... 
Dec 13 14:14:17.017587 env[2169]: time="2024-12-13T14:14:17.017508989Z" level=info msg="Starting up" Dec 13 14:14:17.020688 env[2169]: time="2024-12-13T14:14:17.020610284Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 14:14:17.020688 env[2169]: time="2024-12-13T14:14:17.020674174Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 14:14:17.020908 env[2169]: time="2024-12-13T14:14:17.020724716Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 14:14:17.020908 env[2169]: time="2024-12-13T14:14:17.020749302Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 14:14:17.024016 env[2169]: time="2024-12-13T14:14:17.023952870Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 14:14:17.024016 env[2169]: time="2024-12-13T14:14:17.023997799Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 14:14:17.024252 env[2169]: time="2024-12-13T14:14:17.024032933Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 14:14:17.024252 env[2169]: time="2024-12-13T14:14:17.024054561Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 14:14:17.641542 env[2169]: time="2024-12-13T14:14:17.641495489Z" level=warning msg="Your kernel does not support cgroup blkio weight" Dec 13 14:14:17.641868 env[2169]: time="2024-12-13T14:14:17.641837403Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Dec 13 14:14:17.642259 env[2169]: time="2024-12-13T14:14:17.642229569Z" level=info msg="Loading containers: start." 
Dec 13 14:14:17.763000 audit[2200]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2200 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:14:17.763000 audit[2200]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffdca74980 a2=0 a3=1 items=0 ppid=2169 pid=2200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:14:17.763000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 13 14:14:17.767000 audit[2202]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2202 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:14:17.767000 audit[2202]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=fffffb59b590 a2=0 a3=1 items=0 ppid=2169 pid=2202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:14:17.767000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 13 14:14:17.771000 audit[2204]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2204 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:14:17.771000 audit[2204]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffd2a2dbd0 a2=0 a3=1 items=0 ppid=2169 pid=2204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:14:17.771000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 13 14:14:17.775000 
audit[2206]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2206 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:14:17.775000 audit[2206]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffff2116980 a2=0 a3=1 items=0 ppid=2169 pid=2206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:14:17.775000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 13 14:14:17.780000 audit[2208]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=2208 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:14:17.780000 audit[2208]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff6bb8740 a2=0 a3=1 items=0 ppid=2169 pid=2208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:14:17.780000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Dec 13 14:14:17.809000 audit[2213]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=2213 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:14:17.809000 audit[2213]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffdefacbd0 a2=0 a3=1 items=0 ppid=2169 pid=2213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:14:17.809000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Dec 13 14:14:17.821000 audit[2215]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2215 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:14:17.821000 audit[2215]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd49fead0 a2=0 a3=1 items=0 ppid=2169 pid=2215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:14:17.821000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 13 14:14:17.826000 audit[2217]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=2217 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:14:17.826000 audit[2217]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffc11aa280 a2=0 a3=1 items=0 ppid=2169 pid=2217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:14:17.826000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 13 14:14:17.830000 audit[2219]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=2219 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:14:17.830000 audit[2219]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=308 a0=3 a1=fffff5742970 a2=0 a3=1 items=0 ppid=2169 pid=2219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:14:17.830000 audit: 
PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 13 14:14:17.851000 audit[2223]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=2223 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:14:17.851000 audit[2223]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffcbc58650 a2=0 a3=1 items=0 ppid=2169 pid=2223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:14:17.851000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Dec 13 14:14:17.859000 audit[2224]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2224 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:14:17.859000 audit[2224]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffef45c240 a2=0 a3=1 items=0 ppid=2169 pid=2224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:14:17.859000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 13 14:14:17.876670 kernel: Initializing XFRM netlink socket Dec 13 14:14:17.920404 env[2169]: time="2024-12-13T14:14:17.918962492Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Dec 13 14:14:17.922055 (udev-worker)[2180]: Network interface NamePolicy= disabled on kernel command line. 
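The `audit: PROCTITLE` records above encode each iptables invocation as hex: the bytes are the process's argv with NUL separators. A small decoder (helper name is mine, not part of any tool in this log) recovers the commands Docker issued while building its chains:

```python
# Decode a Linux audit PROCTITLE record: the proctitle field is the
# process's argv, hex-encoded, with NUL bytes separating arguments.
def decode_proctitle(hex_str: str) -> str:
    raw = bytes.fromhex(hex_str)
    # Split on NUL; a few records in this log carry consecutive NULs
    # (empty argv slots), which we drop for readability.
    args = [a.decode("utf-8", errors="replace") for a in raw.split(b"\x00") if a]
    return " ".join(args)

# First PROCTITLE record in this section:
print(decode_proctitle(
    "2F7573722F7362696E2F69707461626C6573002D2D77616974"
    "002D74006E6174002D4E00444F434B4552"
))  # → /usr/sbin/iptables --wait -t nat -N DOCKER
```

Applied to the records above, the sequence reads as the familiar Docker bootstrap: `-N DOCKER` in the nat and filter tables, the `DOCKER-ISOLATION-STAGE-1/2` and `DOCKER-USER` chains, then the MASQUERADE and FORWARD wiring for `docker0`.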
Dec 13 14:14:17.955000 audit[2232]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=2232 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:14:17.955000 audit[2232]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=492 a0=3 a1=ffffcbf17eb0 a2=0 a3=1 items=0 ppid=2169 pid=2232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:14:17.955000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 13 14:14:17.972000 audit[2235]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=2235 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:14:17.972000 audit[2235]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffd7164a90 a2=0 a3=1 items=0 ppid=2169 pid=2235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:14:17.972000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 13 14:14:17.980000 audit[2238]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=2238 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:14:17.980000 audit[2238]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=fffff7188890 a2=0 a3=1 items=0 ppid=2169 pid=2238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:14:17.980000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Dec 13 14:14:17.984000 audit[2240]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=2240 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:14:17.984000 audit[2240]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffcda41910 a2=0 a3=1 items=0 ppid=2169 pid=2240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:14:17.984000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Dec 13 14:14:17.988000 audit[2242]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=2242 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:14:17.988000 audit[2242]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=356 a0=3 a1=ffffc3099980 a2=0 a3=1 items=0 ppid=2169 pid=2242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:14:17.988000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 13 14:14:17.992000 audit[2244]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=2244 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:14:17.992000 audit[2244]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=444 a0=3 a1=fffff4edfbd0 a2=0 a3=1 items=0 ppid=2169 pid=2244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:14:17.992000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 13 14:14:17.996000 audit[2246]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=2246 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:14:17.996000 audit[2246]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=304 a0=3 a1=ffffdbc36650 a2=0 a3=1 items=0 ppid=2169 pid=2246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:14:17.996000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Dec 13 14:14:18.026000 audit[2249]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=2249 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:14:18.026000 audit[2249]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=508 a0=3 a1=ffffe75fe8d0 a2=0 a3=1 items=0 ppid=2169 pid=2249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:14:18.026000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 13 14:14:18.031000 audit[2251]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=2251 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:14:18.031000 audit[2251]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=240 a0=3 a1=ffffc1b51cb0 a2=0 a3=1 items=0 ppid=2169 pid=2251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:14:18.031000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 13 14:14:18.035000 audit[2253]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=2253 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:14:18.035000 audit[2253]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffd94095b0 a2=0 a3=1 items=0 ppid=2169 pid=2253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:14:18.035000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 13 14:14:18.039000 audit[2255]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=2255 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:14:18.039000 audit[2255]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=fffff983ba00 a2=0 a3=1 items=0 ppid=2169 pid=2255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:14:18.039000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 13 
14:14:18.042288 systemd-networkd[1507]: docker0: Link UP Dec 13 14:14:18.057000 audit[2259]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=2259 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:14:18.057000 audit[2259]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffe2c663f0 a2=0 a3=1 items=0 ppid=2169 pid=2259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:14:18.057000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Dec 13 14:14:18.062000 audit[2260]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=2260 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:14:18.062000 audit[2260]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffce32a320 a2=0 a3=1 items=0 ppid=2169 pid=2260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:14:18.062000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 13 14:14:18.064223 env[2169]: time="2024-12-13T14:14:18.064183459Z" level=info msg="Loading containers: done." Dec 13 14:14:18.091097 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2811001751-merged.mount: Deactivated successfully. 
Dec 13 14:14:18.101868 env[2169]: time="2024-12-13T14:14:18.101794958Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 14:14:18.102401 env[2169]: time="2024-12-13T14:14:18.102371549Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Dec 13 14:14:18.102704 env[2169]: time="2024-12-13T14:14:18.102678279Z" level=info msg="Daemon has completed initialization" Dec 13 14:14:18.132296 systemd[1]: Started docker.service. Dec 13 14:14:18.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:18.141590 env[2169]: time="2024-12-13T14:14:18.141491658Z" level=info msg="API listen on /run/docker.sock" Dec 13 14:14:19.498088 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 14:14:19.498433 systemd[1]: Stopped kubelet.service. Dec 13 14:14:19.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:19.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:19.501414 systemd[1]: Starting kubelet.service... Dec 13 14:14:19.529251 env[1837]: time="2024-12-13T14:14:19.529183748Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 14:14:19.942771 systemd[1]: Started kubelet.service. 
Dec 13 14:14:19.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:20.042709 kubelet[2304]: E1213 14:14:20.042613 2304 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:14:20.050269 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:14:20.050685 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:14:20.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 14:14:20.234992 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3689526988.mount: Deactivated successfully. 
Dec 13 14:14:22.457969 env[1837]: time="2024-12-13T14:14:22.457909745Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:22.461045 env[1837]: time="2024-12-13T14:14:22.460994719Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:22.464257 env[1837]: time="2024-12-13T14:14:22.464190192Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:22.467685 env[1837]: time="2024-12-13T14:14:22.467602672Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:22.469530 env[1837]: time="2024-12-13T14:14:22.469465263Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\"" Dec 13 14:14:22.486148 env[1837]: time="2024-12-13T14:14:22.486063804Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 14:14:24.817117 env[1837]: time="2024-12-13T14:14:24.817032677Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:24.820310 env[1837]: time="2024-12-13T14:14:24.820244730Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Dec 13 14:14:24.823769 env[1837]: time="2024-12-13T14:14:24.823707931Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:24.827312 env[1837]: time="2024-12-13T14:14:24.827256713Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:24.828898 env[1837]: time="2024-12-13T14:14:24.828850059Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\"" Dec 13 14:14:24.848298 env[1837]: time="2024-12-13T14:14:24.848244218Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 14:14:26.286414 env[1837]: time="2024-12-13T14:14:26.286347872Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:26.289657 env[1837]: time="2024-12-13T14:14:26.289582200Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:26.293086 env[1837]: time="2024-12-13T14:14:26.293025227Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:26.296555 env[1837]: time="2024-12-13T14:14:26.296494374Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:26.298464 env[1837]: time="2024-12-13T14:14:26.298418490Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\"" Dec 13 14:14:26.315899 env[1837]: time="2024-12-13T14:14:26.315849449Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 14:14:27.697994 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount977610641.mount: Deactivated successfully. Dec 13 14:14:28.528493 env[1837]: time="2024-12-13T14:14:28.528425221Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:28.531329 env[1837]: time="2024-12-13T14:14:28.531267867Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:28.534371 env[1837]: time="2024-12-13T14:14:28.534321683Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:28.536717 env[1837]: time="2024-12-13T14:14:28.536674869Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:28.537550 env[1837]: time="2024-12-13T14:14:28.537505045Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference 
\"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\"" Dec 13 14:14:28.555490 env[1837]: time="2024-12-13T14:14:28.555409918Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 14:14:29.124660 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1595461676.mount: Deactivated successfully. Dec 13 14:14:30.100750 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 14:14:30.111055 kernel: kauditd_printk_skb: 88 callbacks suppressed Dec 13 14:14:30.111194 kernel: audit: type=1130 audit(1734099270.099:201): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:30.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:30.101077 systemd[1]: Stopped kubelet.service. Dec 13 14:14:30.103800 systemd[1]: Starting kubelet.service... Dec 13 14:14:30.120405 kernel: audit: type=1131 audit(1734099270.099:202): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:30.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:31.475579 systemd[1]: Started kubelet.service. Dec 13 14:14:31.485689 kernel: audit: type=1130 audit(1734099271.475:203): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:14:31.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:31.593004 kubelet[2341]: E1213 14:14:31.592935 2341 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:14:31.597036 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:14:31.597426 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:14:31.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 14:14:31.607671 kernel: audit: type=1131 audit(1734099271.597:204): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Dec 13 14:14:32.129942 env[1837]: time="2024-12-13T14:14:32.129880617Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:32.133582 env[1837]: time="2024-12-13T14:14:32.133524123Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:32.136875 env[1837]: time="2024-12-13T14:14:32.136826873Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:32.141789 env[1837]: time="2024-12-13T14:14:32.141720542Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:32.142328 env[1837]: time="2024-12-13T14:14:32.142275327Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Dec 13 14:14:32.158345 env[1837]: time="2024-12-13T14:14:32.158268444Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 14:14:32.665461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2246697518.mount: Deactivated successfully. 
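The kauditd lines interleaved above carry timestamps of the form `audit(1734099270.099:201)`: epoch seconds, milliseconds, and an audit serial number. Converting the epoch (a sketch; the function name is mine) confirms it lines up with the log's wall-clock prefixes, which run in UTC (`-00`):

```python
from datetime import datetime, timezone

# audit(EPOCH.MSEC:SERIAL) -> UTC wall-clock string for the EPOCH part.
def audit_time(stamp: str) -> str:
    epoch = float(stamp.split(":")[0])
    return datetime.fromtimestamp(epoch, tz=timezone.utc).strftime("%Y-%m-%d %H:%M:%S")

print(audit_time("1734099270.099:201"))  # → 2024-12-13 14:14:30
```

That matches the surrounding `Dec 13 14:14:30.1…` journal entries, so the serial numbers (`:201`, `:202`, …) can be used to order audit events across the suppressed-callback gaps reported by `kauditd_printk_skb`.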
Dec 13 14:14:32.673831 env[1837]: time="2024-12-13T14:14:32.673775098Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:32.676752 env[1837]: time="2024-12-13T14:14:32.676707940Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:32.679324 env[1837]: time="2024-12-13T14:14:32.679264553Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:32.681973 env[1837]: time="2024-12-13T14:14:32.681914655Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:32.683175 env[1837]: time="2024-12-13T14:14:32.683126666Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Dec 13 14:14:32.700512 env[1837]: time="2024-12-13T14:14:32.700448608Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 14:14:33.269489 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2956276218.mount: Deactivated successfully. Dec 13 14:14:34.424166 amazon-ssm-agent[1813]: 2024-12-13 14:14:34 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds. 
Dec 13 14:14:36.088398 env[1837]: time="2024-12-13T14:14:36.088316106Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:36.091323 env[1837]: time="2024-12-13T14:14:36.091274168Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:36.094674 env[1837]: time="2024-12-13T14:14:36.094598805Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:36.098192 env[1837]: time="2024-12-13T14:14:36.098128424Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:36.100209 env[1837]: time="2024-12-13T14:14:36.100145691Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Dec 13 14:14:37.028000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:37.028111 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 13 14:14:37.038784 kernel: audit: type=1131 audit(1734099277.028:205): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:14:41.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:41.600749 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 14:14:41.601079 systemd[1]: Stopped kubelet.service. Dec 13 14:14:41.612400 systemd[1]: Starting kubelet.service... Dec 13 14:14:41.629033 kernel: audit: type=1130 audit(1734099281.599:206): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:41.629164 kernel: audit: type=1131 audit(1734099281.599:207): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:41.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:41.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:41.998238 systemd[1]: Started kubelet.service. Dec 13 14:14:42.006662 kernel: audit: type=1130 audit(1734099281.998:208): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:14:42.105260 kubelet[2429]: E1213 14:14:42.105167 2429 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:14:42.110000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 14:14:42.109887 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:14:42.110273 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:14:42.121734 kernel: audit: type=1131 audit(1734099282.110:209): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 14:14:44.987709 systemd[1]: Stopped kubelet.service. Dec 13 14:14:44.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:44.994166 systemd[1]: Starting kubelet.service... Dec 13 14:14:44.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:45.003654 kernel: audit: type=1130 audit(1734099284.988:210): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:14:45.003762 kernel: audit: type=1131 audit(1734099284.988:211): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:45.043783 systemd[1]: Reloading. Dec 13 14:14:45.185857 /usr/lib/systemd/system-generators/torcx-generator[2462]: time="2024-12-13T14:14:45Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:14:45.188082 /usr/lib/systemd/system-generators/torcx-generator[2462]: time="2024-12-13T14:14:45Z" level=info msg="torcx already run" Dec 13 14:14:45.387862 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:14:45.387899 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:14:45.428887 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:14:45.651466 systemd[1]: Started kubelet.service. Dec 13 14:14:45.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:45.663463 systemd[1]: Stopping kubelet.service... Dec 13 14:14:45.665592 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 14:14:45.666363 systemd[1]: Stopped kubelet.service. 
Dec 13 14:14:45.666676 kernel: audit: type=1130 audit(1734099285.650:212): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:45.666775 kernel: audit: type=1131 audit(1734099285.664:213): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:45.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:45.670412 systemd[1]: Starting kubelet.service... Dec 13 14:14:46.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:46.073941 systemd[1]: Started kubelet.service. Dec 13 14:14:46.087126 kernel: audit: type=1130 audit(1734099286.073:214): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:46.180283 kubelet[2539]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:14:46.180923 kubelet[2539]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Dec 13 14:14:46.181024 kubelet[2539]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:14:46.181263 kubelet[2539]: I1213 14:14:46.181206 2539 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:14:47.829809 kubelet[2539]: I1213 14:14:47.829750 2539 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 14:14:47.829809 kubelet[2539]: I1213 14:14:47.829800 2539 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:14:47.830447 kubelet[2539]: I1213 14:14:47.830129 2539 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 14:14:47.885911 kubelet[2539]: E1213 14:14:47.885873 2539 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.26.163:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.26.163:6443: connect: connection refused Dec 13 14:14:47.886136 kubelet[2539]: I1213 14:14:47.886049 2539 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:14:47.899945 kubelet[2539]: I1213 14:14:47.899907 2539 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 14:14:47.900868 kubelet[2539]: I1213 14:14:47.900843 2539 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:14:47.901296 kubelet[2539]: I1213 14:14:47.901269 2539 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 14:14:47.901517 kubelet[2539]: I1213 14:14:47.901495 2539 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 14:14:47.901687 kubelet[2539]: I1213 14:14:47.901666 2539 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 14:14:47.904321 kubelet[2539]: 
I1213 14:14:47.904291 2539 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:14:47.909434 kubelet[2539]: I1213 14:14:47.909402 2539 kubelet.go:396] "Attempting to sync node with API server" Dec 13 14:14:47.909602 kubelet[2539]: I1213 14:14:47.909579 2539 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:14:47.910229 kubelet[2539]: W1213 14:14:47.910148 2539 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.26.163:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-163&limit=500&resourceVersion=0": dial tcp 172.31.26.163:6443: connect: connection refused Dec 13 14:14:47.910384 kubelet[2539]: E1213 14:14:47.910248 2539 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.26.163:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-163&limit=500&resourceVersion=0": dial tcp 172.31.26.163:6443: connect: connection refused Dec 13 14:14:47.911102 kubelet[2539]: I1213 14:14:47.911073 2539 kubelet.go:312] "Adding apiserver pod source" Dec 13 14:14:47.911233 kubelet[2539]: I1213 14:14:47.911211 2539 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:14:47.912821 kubelet[2539]: I1213 14:14:47.912787 2539 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:14:47.913499 kubelet[2539]: I1213 14:14:47.913472 2539 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:14:47.914849 kubelet[2539]: W1213 14:14:47.914814 2539 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Dec 13 14:14:47.916131 kubelet[2539]: I1213 14:14:47.916097 2539 server.go:1256] "Started kubelet" Dec 13 14:14:47.916496 kubelet[2539]: W1213 14:14:47.916445 2539 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.26.163:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.26.163:6443: connect: connection refused Dec 13 14:14:47.916707 kubelet[2539]: E1213 14:14:47.916681 2539 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.26.163:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.26.163:6443: connect: connection refused Dec 13 14:14:47.932164 kubelet[2539]: E1213 14:14:47.932120 2539 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.26.163:6443/api/v1/namespaces/default/events\": dial tcp 172.31.26.163:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-26-163.1810c21d5d7de8bc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-26-163,UID:ip-172-31-26-163,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-26-163,},FirstTimestamp:2024-12-13 14:14:47.916013756 +0000 UTC m=+1.818882236,LastTimestamp:2024-12-13 14:14:47.916013756 +0000 UTC m=+1.818882236,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-26-163,}" Dec 13 14:14:47.932517 kubelet[2539]: I1213 14:14:47.932488 2539 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:14:47.934469 kubelet[2539]: I1213 14:14:47.934431 2539 server.go:461] "Adding debug handlers to kubelet server" Dec 13 14:14:47.937469 kubelet[2539]: I1213 14:14:47.937428 2539 ratelimit.go:55] "Setting rate limiting 
for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:14:47.939224 kubelet[2539]: I1213 14:14:47.939175 2539 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 14:14:47.940000 audit[2539]: AVC avc: denied { mac_admin } for pid=2539 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:14:47.942858 kubelet[2539]: I1213 14:14:47.941538 2539 kubelet.go:1417] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Dec 13 14:14:47.942858 kubelet[2539]: I1213 14:14:47.941611 2539 kubelet.go:1421] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Dec 13 14:14:47.940000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 14:14:47.950540 kubelet[2539]: I1213 14:14:47.950505 2539 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 14:14:47.953597 kernel: audit: type=1400 audit(1734099287.940:215): avc: denied { mac_admin } for pid=2539 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:14:47.953762 kernel: audit: type=1401 audit(1734099287.940:215): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 14:14:47.954962 kubelet[2539]: I1213 14:14:47.954920 2539 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 14:14:47.940000 audit[2539]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000da0060 a1=4000d9e2a0 a2=4000da0030 a3=25 items=0 ppid=1 pid=2539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:14:47.955895 kubelet[2539]: I1213 14:14:47.955864 2539 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 14:14:47.956155 kubelet[2539]: I1213 14:14:47.956135 2539 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 14:14:47.962047 kubelet[2539]: W1213 14:14:47.961981 2539 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.26.163:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.26.163:6443: connect: connection refused Dec 13 14:14:47.962242 kubelet[2539]: E1213 14:14:47.962219 2539 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.26.163:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.26.163:6443: connect: connection refused Dec 13 14:14:47.962510 kubelet[2539]: E1213 14:14:47.962488 2539 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.163:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-163?timeout=10s\": dial tcp 172.31.26.163:6443: connect: connection refused" interval="200ms" Dec 13 14:14:47.965314 kernel: audit: type=1300 audit(1734099287.940:215): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000da0060 a1=4000d9e2a0 a2=4000da0030 a3=25 items=0 ppid=1 pid=2539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:14:47.940000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 14:14:47.975035 kernel: audit: type=1327 audit(1734099287.940:215): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 14:14:47.975675 kubelet[2539]: I1213 14:14:47.975608 2539 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:14:47.975987 kubelet[2539]: I1213 14:14:47.975956 2539 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 14:14:47.940000 audit[2539]: AVC avc: denied { mac_admin } for pid=2539 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:14:47.979685 kubelet[2539]: I1213 14:14:47.979649 2539 factory.go:221] Registration of the containerd container factory successfully Dec 13 14:14:47.985569 kernel: audit: type=1400 audit(1734099287.940:216): avc: denied { mac_admin } for pid=2539 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:14:47.940000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 14:14:47.991395 kernel: audit: type=1401 audit(1734099287.940:216): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 14:14:47.991501 kernel: audit: type=1300 audit(1734099287.940:216): arch=c00000b7 syscall=5 success=no exit=-22 
a0=4000c33e40 a1=4000d9e2b8 a2=4000da00f0 a3=25 items=0 ppid=1 pid=2539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:14:47.940000 audit[2539]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000c33e40 a1=4000d9e2b8 a2=4000da00f0 a3=25 items=0 ppid=1 pid=2539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:14:47.940000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 14:14:48.013704 kernel: audit: type=1327 audit(1734099287.940:216): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 14:14:47.976000 audit[2549]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=2549 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:14:48.020826 kernel: audit: type=1325 audit(1734099287.976:217): table=mangle:26 family=2 entries=2 op=nft_register_chain pid=2549 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:14:48.020939 kernel: audit: type=1300 audit(1734099287.976:217): arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffff563b9f0 a2=0 a3=1 items=0 ppid=2539 pid=2549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 
14:14:47.976000 audit[2549]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffff563b9f0 a2=0 a3=1 items=0 ppid=2539 pid=2549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:14:48.024980 kubelet[2539]: E1213 14:14:48.024943 2539 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 14:14:47.976000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 13 14:14:47.986000 audit[2550]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=2550 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:14:47.986000 audit[2550]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd5a01e40 a2=0 a3=1 items=0 ppid=2539 pid=2550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:14:47.986000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 13 14:14:48.002000 audit[2552]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=2552 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:14:48.002000 audit[2552]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffcf807a40 a2=0 a3=1 items=0 ppid=2539 pid=2552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:14:48.002000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 14:14:48.014000 audit[2555]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=2555 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:14:48.014000 audit[2555]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffe61f3470 a2=0 a3=1 items=0 ppid=2539 pid=2555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:14:48.014000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 14:14:48.049767 kubelet[2539]: I1213 14:14:48.049707 2539 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 14:14:48.049767 kubelet[2539]: I1213 14:14:48.049758 2539 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 14:14:48.049975 kubelet[2539]: I1213 14:14:48.049790 2539 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:14:48.052000 audit[2562]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2562 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:14:48.052000 audit[2562]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffc19f7d60 a2=0 a3=1 items=0 ppid=2539 pid=2562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:14:48.052000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 13 
14:14:48.055065 kubelet[2539]: I1213 14:14:48.053760 2539 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 14:14:48.055420 kubelet[2539]: I1213 14:14:48.055375 2539 policy_none.go:49] "None policy: Start" Dec 13 14:14:48.057362 kubelet[2539]: I1213 14:14:48.057327 2539 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 14:14:48.057575 kubelet[2539]: I1213 14:14:48.057552 2539 state_mem.go:35] "Initializing new in-memory state store" Dec 13 14:14:48.057000 audit[2563]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=2563 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:14:48.057000 audit[2563]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffc5299090 a2=0 a3=1 items=0 ppid=2539 pid=2563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:14:48.057000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 13 14:14:48.059070 kubelet[2539]: E1213 14:14:48.058042 2539 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.26.163:6443/api/v1/namespaces/default/events\": dial tcp 172.31.26.163:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-26-163.1810c21d5d7de8bc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-26-163,UID:ip-172-31-26-163,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-26-163,},FirstTimestamp:2024-12-13 14:14:47.916013756 +0000 UTC m=+1.818882236,LastTimestamp:2024-12-13 14:14:47.916013756 +0000 UTC m=+1.818882236,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-26-163,}" Dec 13 14:14:48.059781 kubelet[2539]: I1213 14:14:48.059726 2539 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 14:14:48.059781 kubelet[2539]: I1213 14:14:48.059767 2539 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 14:14:48.059939 kubelet[2539]: I1213 14:14:48.059798 2539 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 14:14:48.059939 kubelet[2539]: E1213 14:14:48.059892 2539 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 14:14:48.060510 kubelet[2539]: I1213 14:14:48.060477 2539 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-26-163" Dec 13 14:14:48.062181 kubelet[2539]: E1213 14:14:48.062122 2539 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.26.163:6443/api/v1/nodes\": dial tcp 172.31.26.163:6443: connect: connection refused" node="ip-172-31-26-163" Dec 13 14:14:48.062327 kubelet[2539]: W1213 14:14:48.062254 2539 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.26.163:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.26.163:6443: connect: connection refused Dec 13 14:14:48.062327 kubelet[2539]: E1213 14:14:48.062299 2539 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.26.163:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.26.163:6443: connect: connection refused Dec 13 14:14:48.062000 audit[2566]: NETFILTER_CFG table=mangle:32 family=10 entries=1 op=nft_register_chain pid=2566 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 
14:14:48.062000 audit[2566]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff44b0650 a2=0 a3=1 items=0 ppid=2539 pid=2566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:14:48.062000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 13 14:14:48.066000 audit[2567]: NETFILTER_CFG table=nat:33 family=10 entries=2 op=nft_register_chain pid=2567 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:14:48.066000 audit[2567]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=128 a0=3 a1=ffffeafb1bd0 a2=0 a3=1 items=0 ppid=2539 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:14:48.066000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 13 14:14:48.067000 audit[2564]: NETFILTER_CFG table=mangle:34 family=2 entries=1 op=nft_register_chain pid=2564 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:14:48.067000 audit[2564]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd7544e40 a2=0 a3=1 items=0 ppid=2539 pid=2564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:14:48.067000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 13 14:14:48.073000 audit[2568]: NETFILTER_CFG table=filter:35 family=10 entries=2 op=nft_register_chain pid=2568 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Dec 13 14:14:48.073000 audit[2568]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffffb557770 a2=0 a3=1 items=0 ppid=2539 pid=2568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:14:48.073000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 13 14:14:48.077210 kubelet[2539]: I1213 14:14:48.077149 2539 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 14:14:48.076000 audit[2539]: AVC avc: denied { mac_admin } for pid=2539 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:14:48.076000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 14:14:48.076000 audit[2539]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000eedbf0 a1=4000ee1278 a2=4000eedbc0 a3=25 items=0 ppid=1 pid=2539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:14:48.076000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 14:14:48.077791 kubelet[2539]: I1213 14:14:48.077341 2539 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Dec 13 14:14:48.077791 kubelet[2539]: I1213 14:14:48.077672 2539 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 14:14:48.086000 audit[2569]: NETFILTER_CFG table=nat:36 family=2 entries=1 op=nft_register_chain pid=2569 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:14:48.086000 audit[2569]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffca1be5d0 a2=0 a3=1 items=0 ppid=2539 pid=2569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:14:48.086000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 13 14:14:48.090342 kubelet[2539]: E1213 14:14:48.085382 2539 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-26-163\" not found" Dec 13 14:14:48.092000 audit[2570]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_chain pid=2570 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:14:48.092000 audit[2570]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffe7ad290 a2=0 a3=1 items=0 ppid=2539 pid=2570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:14:48.092000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 13 14:14:48.160745 kubelet[2539]: I1213 14:14:48.160684 2539 topology_manager.go:215] "Topology Admit Handler" podUID="9a42a83e28e90ec64e3896b350c8a78d" podNamespace="kube-system" 
podName="kube-apiserver-ip-172-31-26-163" Dec 13 14:14:48.162451 kubelet[2539]: I1213 14:14:48.162416 2539 topology_manager.go:215] "Topology Admit Handler" podUID="49d279f3d0e7f4fdf16c5b56b696a474" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-26-163" Dec 13 14:14:48.163164 kubelet[2539]: E1213 14:14:48.163121 2539 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.163:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-163?timeout=10s\": dial tcp 172.31.26.163:6443: connect: connection refused" interval="400ms" Dec 13 14:14:48.165032 kubelet[2539]: I1213 14:14:48.164976 2539 topology_manager.go:215] "Topology Admit Handler" podUID="197ee24df2fa788b67d4689968741292" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-26-163" Dec 13 14:14:48.258020 kubelet[2539]: I1213 14:14:48.257980 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9a42a83e28e90ec64e3896b350c8a78d-ca-certs\") pod \"kube-apiserver-ip-172-31-26-163\" (UID: \"9a42a83e28e90ec64e3896b350c8a78d\") " pod="kube-system/kube-apiserver-ip-172-31-26-163" Dec 13 14:14:48.258306 kubelet[2539]: I1213 14:14:48.258283 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9a42a83e28e90ec64e3896b350c8a78d-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-26-163\" (UID: \"9a42a83e28e90ec64e3896b350c8a78d\") " pod="kube-system/kube-apiserver-ip-172-31-26-163" Dec 13 14:14:48.258502 kubelet[2539]: I1213 14:14:48.258479 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/49d279f3d0e7f4fdf16c5b56b696a474-ca-certs\") pod \"kube-controller-manager-ip-172-31-26-163\" (UID: 
\"49d279f3d0e7f4fdf16c5b56b696a474\") " pod="kube-system/kube-controller-manager-ip-172-31-26-163" Dec 13 14:14:48.258682 kubelet[2539]: I1213 14:14:48.258654 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/49d279f3d0e7f4fdf16c5b56b696a474-k8s-certs\") pod \"kube-controller-manager-ip-172-31-26-163\" (UID: \"49d279f3d0e7f4fdf16c5b56b696a474\") " pod="kube-system/kube-controller-manager-ip-172-31-26-163" Dec 13 14:14:48.258851 kubelet[2539]: I1213 14:14:48.258815 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/49d279f3d0e7f4fdf16c5b56b696a474-kubeconfig\") pod \"kube-controller-manager-ip-172-31-26-163\" (UID: \"49d279f3d0e7f4fdf16c5b56b696a474\") " pod="kube-system/kube-controller-manager-ip-172-31-26-163" Dec 13 14:14:48.259016 kubelet[2539]: I1213 14:14:48.258993 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9a42a83e28e90ec64e3896b350c8a78d-k8s-certs\") pod \"kube-apiserver-ip-172-31-26-163\" (UID: \"9a42a83e28e90ec64e3896b350c8a78d\") " pod="kube-system/kube-apiserver-ip-172-31-26-163" Dec 13 14:14:48.259172 kubelet[2539]: I1213 14:14:48.259151 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/49d279f3d0e7f4fdf16c5b56b696a474-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-26-163\" (UID: \"49d279f3d0e7f4fdf16c5b56b696a474\") " pod="kube-system/kube-controller-manager-ip-172-31-26-163" Dec 13 14:14:48.259354 kubelet[2539]: I1213 14:14:48.259332 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/49d279f3d0e7f4fdf16c5b56b696a474-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-26-163\" (UID: \"49d279f3d0e7f4fdf16c5b56b696a474\") " pod="kube-system/kube-controller-manager-ip-172-31-26-163" Dec 13 14:14:48.259514 kubelet[2539]: I1213 14:14:48.259493 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/197ee24df2fa788b67d4689968741292-kubeconfig\") pod \"kube-scheduler-ip-172-31-26-163\" (UID: \"197ee24df2fa788b67d4689968741292\") " pod="kube-system/kube-scheduler-ip-172-31-26-163" Dec 13 14:14:48.264238 kubelet[2539]: I1213 14:14:48.264182 2539 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-26-163" Dec 13 14:14:48.264847 kubelet[2539]: E1213 14:14:48.264811 2539 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.26.163:6443/api/v1/nodes\": dial tcp 172.31.26.163:6443: connect: connection refused" node="ip-172-31-26-163" Dec 13 14:14:48.474670 env[1837]: time="2024-12-13T14:14:48.474175122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-26-163,Uid:9a42a83e28e90ec64e3896b350c8a78d,Namespace:kube-system,Attempt:0,}" Dec 13 14:14:48.478063 env[1837]: time="2024-12-13T14:14:48.478003773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-26-163,Uid:197ee24df2fa788b67d4689968741292,Namespace:kube-system,Attempt:0,}" Dec 13 14:14:48.481929 env[1837]: time="2024-12-13T14:14:48.481871016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-26-163,Uid:49d279f3d0e7f4fdf16c5b56b696a474,Namespace:kube-system,Attempt:0,}" Dec 13 14:14:48.564458 kubelet[2539]: E1213 14:14:48.564409 2539 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://172.31.26.163:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-163?timeout=10s\": dial tcp 172.31.26.163:6443: connect: connection refused" interval="800ms" Dec 13 14:14:48.667742 kubelet[2539]: I1213 14:14:48.667673 2539 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-26-163" Dec 13 14:14:48.668179 kubelet[2539]: E1213 14:14:48.668143 2539 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.26.163:6443/api/v1/nodes\": dial tcp 172.31.26.163:6443: connect: connection refused" node="ip-172-31-26-163" Dec 13 14:14:48.987199 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1945486809.mount: Deactivated successfully. Dec 13 14:14:48.994641 env[1837]: time="2024-12-13T14:14:48.994533700Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:49.002712 env[1837]: time="2024-12-13T14:14:49.002613492Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:49.004771 env[1837]: time="2024-12-13T14:14:49.004707790Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:49.008317 env[1837]: time="2024-12-13T14:14:49.008250007Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:49.010339 env[1837]: time="2024-12-13T14:14:49.010270871Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" 
Dec 13 14:14:49.012330 env[1837]: time="2024-12-13T14:14:49.012268298Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:49.014267 env[1837]: time="2024-12-13T14:14:49.014181520Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:49.021489 env[1837]: time="2024-12-13T14:14:49.021423064Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:49.029604 env[1837]: time="2024-12-13T14:14:49.029469781Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:49.032402 env[1837]: time="2024-12-13T14:14:49.032312953Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:49.035079 env[1837]: time="2024-12-13T14:14:49.034817877Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:49.054805 env[1837]: time="2024-12-13T14:14:49.054728858Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:49.060877 env[1837]: time="2024-12-13T14:14:49.060757580Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:14:49.061117 env[1837]: time="2024-12-13T14:14:49.060836466Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:14:49.061117 env[1837]: time="2024-12-13T14:14:49.060864658Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:14:49.061543 env[1837]: time="2024-12-13T14:14:49.061472654Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/18c0a87590eb28ad27e46e4c46e374a5b43e8bf2f1a0dc067f2acbef745df03b pid=2579 runtime=io.containerd.runc.v2 Dec 13 14:14:49.124513 env[1837]: time="2024-12-13T14:14:49.123133288Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:14:49.124513 env[1837]: time="2024-12-13T14:14:49.123335569Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:14:49.124513 env[1837]: time="2024-12-13T14:14:49.123443007Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:14:49.124513 env[1837]: time="2024-12-13T14:14:49.123991232Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4081c058d68d99a993eed0316727ef9208d5f0c7f5f3936ab92b5929d0478ba4 pid=2610 runtime=io.containerd.runc.v2 Dec 13 14:14:49.162925 env[1837]: time="2024-12-13T14:14:49.162765600Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:14:49.163294 env[1837]: time="2024-12-13T14:14:49.163197970Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:14:49.163635 env[1837]: time="2024-12-13T14:14:49.163507341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:14:49.164526 env[1837]: time="2024-12-13T14:14:49.164373774Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3225b87fc3597b24d12ae9d60b82b2fe2331956d6c27d984fed8ecd8729e5c13 pid=2635 runtime=io.containerd.runc.v2 Dec 13 14:14:49.222399 env[1837]: time="2024-12-13T14:14:49.222317947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-26-163,Uid:9a42a83e28e90ec64e3896b350c8a78d,Namespace:kube-system,Attempt:0,} returns sandbox id \"18c0a87590eb28ad27e46e4c46e374a5b43e8bf2f1a0dc067f2acbef745df03b\"" Dec 13 14:14:49.258117 env[1837]: time="2024-12-13T14:14:49.251381016Z" level=info msg="CreateContainer within sandbox \"18c0a87590eb28ad27e46e4c46e374a5b43e8bf2f1a0dc067f2acbef745df03b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 14:14:49.266508 kubelet[2539]: W1213 14:14:49.266278 2539 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.26.163:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-163&limit=500&resourceVersion=0": dial tcp 172.31.26.163:6443: connect: connection refused Dec 13 14:14:49.266508 kubelet[2539]: E1213 14:14:49.266391 2539 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.26.163:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-163&limit=500&resourceVersion=0": dial tcp 172.31.26.163:6443: 
connect: connection refused Dec 13 14:14:49.304002 env[1837]: time="2024-12-13T14:14:49.303934432Z" level=info msg="CreateContainer within sandbox \"18c0a87590eb28ad27e46e4c46e374a5b43e8bf2f1a0dc067f2acbef745df03b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"439aa610405becb95f9e80daa4c0a3d00b8399842ca7099a1cd4ffc86641b647\"" Dec 13 14:14:49.308129 env[1837]: time="2024-12-13T14:14:49.308077007Z" level=info msg="StartContainer for \"439aa610405becb95f9e80daa4c0a3d00b8399842ca7099a1cd4ffc86641b647\"" Dec 13 14:14:49.334597 env[1837]: time="2024-12-13T14:14:49.334513930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-26-163,Uid:49d279f3d0e7f4fdf16c5b56b696a474,Namespace:kube-system,Attempt:0,} returns sandbox id \"3225b87fc3597b24d12ae9d60b82b2fe2331956d6c27d984fed8ecd8729e5c13\"" Dec 13 14:14:49.342787 env[1837]: time="2024-12-13T14:14:49.339698300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-26-163,Uid:197ee24df2fa788b67d4689968741292,Namespace:kube-system,Attempt:0,} returns sandbox id \"4081c058d68d99a993eed0316727ef9208d5f0c7f5f3936ab92b5929d0478ba4\"" Dec 13 14:14:49.343092 env[1837]: time="2024-12-13T14:14:49.341951559Z" level=info msg="CreateContainer within sandbox \"3225b87fc3597b24d12ae9d60b82b2fe2331956d6c27d984fed8ecd8729e5c13\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 14:14:49.350076 env[1837]: time="2024-12-13T14:14:49.350010150Z" level=info msg="CreateContainer within sandbox \"4081c058d68d99a993eed0316727ef9208d5f0c7f5f3936ab92b5929d0478ba4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 14:14:49.366469 kubelet[2539]: E1213 14:14:49.366331 2539 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.163:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-163?timeout=10s\": dial tcp 172.31.26.163:6443: 
connect: connection refused" interval="1.6s" Dec 13 14:14:49.368728 kubelet[2539]: W1213 14:14:49.366856 2539 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.26.163:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.26.163:6443: connect: connection refused Dec 13 14:14:49.368728 kubelet[2539]: E1213 14:14:49.367066 2539 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.26.163:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.26.163:6443: connect: connection refused Dec 13 14:14:49.401665 env[1837]: time="2024-12-13T14:14:49.397815073Z" level=info msg="CreateContainer within sandbox \"3225b87fc3597b24d12ae9d60b82b2fe2331956d6c27d984fed8ecd8729e5c13\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0a65327b1ad519891d82cb157f7c05d3b20566a11d6dead32e25adb40186e3a9\"" Dec 13 14:14:49.408533 env[1837]: time="2024-12-13T14:14:49.401967710Z" level=info msg="StartContainer for \"0a65327b1ad519891d82cb157f7c05d3b20566a11d6dead32e25adb40186e3a9\"" Dec 13 14:14:49.409889 env[1837]: time="2024-12-13T14:14:49.409778310Z" level=info msg="CreateContainer within sandbox \"4081c058d68d99a993eed0316727ef9208d5f0c7f5f3936ab92b5929d0478ba4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c04699f4541468a76420c69670190d96234fe42354fe990aeecd2c63982a8ee3\"" Dec 13 14:14:49.411644 kubelet[2539]: W1213 14:14:49.411487 2539 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.26.163:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.26.163:6443: connect: connection refused Dec 13 14:14:49.411790 kubelet[2539]: E1213 14:14:49.411660 2539 reflector.go:147] 
vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.26.163:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.26.163:6443: connect: connection refused Dec 13 14:14:49.415318 env[1837]: time="2024-12-13T14:14:49.415015643Z" level=info msg="StartContainer for \"c04699f4541468a76420c69670190d96234fe42354fe990aeecd2c63982a8ee3\"" Dec 13 14:14:49.484657 kubelet[2539]: I1213 14:14:49.482015 2539 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-26-163" Dec 13 14:14:49.484657 kubelet[2539]: E1213 14:14:49.482842 2539 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.26.163:6443/api/v1/nodes\": dial tcp 172.31.26.163:6443: connect: connection refused" node="ip-172-31-26-163" Dec 13 14:14:49.498491 env[1837]: time="2024-12-13T14:14:49.497212519Z" level=info msg="StartContainer for \"439aa610405becb95f9e80daa4c0a3d00b8399842ca7099a1cd4ffc86641b647\" returns successfully" Dec 13 14:14:49.535570 kubelet[2539]: W1213 14:14:49.535360 2539 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.26.163:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.26.163:6443: connect: connection refused Dec 13 14:14:49.535570 kubelet[2539]: E1213 14:14:49.535479 2539 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.26.163:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.26.163:6443: connect: connection refused Dec 13 14:14:49.624831 env[1837]: time="2024-12-13T14:14:49.624766547Z" level=info msg="StartContainer for \"0a65327b1ad519891d82cb157f7c05d3b20566a11d6dead32e25adb40186e3a9\" returns successfully" Dec 13 14:14:49.686042 env[1837]: time="2024-12-13T14:14:49.685959666Z" level=info msg="StartContainer 
for \"c04699f4541468a76420c69670190d96234fe42354fe990aeecd2c63982a8ee3\" returns successfully" Dec 13 14:14:51.085437 kubelet[2539]: I1213 14:14:51.085384 2539 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-26-163" Dec 13 14:14:51.439506 update_engine[1830]: I1213 14:14:51.438679 1830 update_attempter.cc:509] Updating boot flags... Dec 13 14:14:53.212473 kubelet[2539]: E1213 14:14:53.212405 2539 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-26-163\" not found" node="ip-172-31-26-163" Dec 13 14:14:53.251588 kubelet[2539]: I1213 14:14:53.251545 2539 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-26-163" Dec 13 14:14:53.918281 kubelet[2539]: I1213 14:14:53.918221 2539 apiserver.go:52] "Watching apiserver" Dec 13 14:14:53.956517 kubelet[2539]: I1213 14:14:53.956456 2539 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 14:14:56.342553 systemd[1]: Reloading. Dec 13 14:14:56.545905 /usr/lib/systemd/system-generators/torcx-generator[2932]: time="2024-12-13T14:14:56Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:14:56.550092 /usr/lib/systemd/system-generators/torcx-generator[2932]: time="2024-12-13T14:14:56Z" level=info msg="torcx already run" Dec 13 14:14:56.739580 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:14:56.739884 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 13 14:14:56.785292 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:14:57.043980 systemd[1]: Stopping kubelet.service... Dec 13 14:14:57.064470 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 14:14:57.073718 kernel: kauditd_printk_skb: 38 callbacks suppressed Dec 13 14:14:57.073845 kernel: audit: type=1131 audit(1734099297.063:230): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:57.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:57.065172 systemd[1]: Stopped kubelet.service. Dec 13 14:14:57.069665 systemd[1]: Starting kubelet.service... Dec 13 14:14:57.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:57.467713 systemd[1]: Started kubelet.service. Dec 13 14:14:57.484538 kernel: audit: type=1130 audit(1734099297.467:231): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:14:57.622168 kubelet[2999]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
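The audit SYSCALL records throughout this log are space-separated key=value pairs, with double quotes around values such as comm= and exe=. A minimal sketch of splitting one record into a dict with the standard library; the record text below is trimmed from a record in this log, and nothing beyond it is assumed:

```python
import shlex

# A few fields from one audit SYSCALL record in this log.
record = ('arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a2=0 a3=1 '
          'items=0 ppid=2539 pid=2566 comm="ip6tables" '
          'exe="/usr/sbin/xtables-nft-multi" key=(null)')

# shlex honors the double quotes, so comm/exe come out unquoted.
fields = dict(pair.split("=", 1) for pair in shlex.split(record))
print(fields["syscall"], fields["comm"], fields["exe"])
# → 211 ip6tables /usr/sbin/xtables-nft-multi
```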
Dec 13 14:14:57.623239 kubelet[2999]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 14:14:57.623492 kubelet[2999]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:14:57.623845 kubelet[2999]: I1213 14:14:57.623754 2999 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:14:57.633372 kubelet[2999]: I1213 14:14:57.633324 2999 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 14:14:57.633741 kubelet[2999]: I1213 14:14:57.633707 2999 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:14:57.634293 kubelet[2999]: I1213 14:14:57.634262 2999 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 14:14:57.637376 kubelet[2999]: I1213 14:14:57.637324 2999 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 14:14:57.662577 kubelet[2999]: I1213 14:14:57.662534 2999 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:14:57.684266 kubelet[2999]: I1213 14:14:57.684229 2999 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 14:14:57.685468 kubelet[2999]: I1213 14:14:57.685439 2999 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:14:57.685929 kubelet[2999]: I1213 14:14:57.685899 2999 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 14:14:57.686169 kubelet[2999]: I1213 14:14:57.686147 2999 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 14:14:57.686280 kubelet[2999]: I1213 14:14:57.686259 2999 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 14:14:57.686522 kubelet[2999]: 
I1213 14:14:57.686500 2999 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:14:57.686857 kubelet[2999]: I1213 14:14:57.686837 2999 kubelet.go:396] "Attempting to sync node with API server" Dec 13 14:14:57.688261 kubelet[2999]: I1213 14:14:57.688229 2999 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:14:57.688534 kubelet[2999]: I1213 14:14:57.688512 2999 kubelet.go:312] "Adding apiserver pod source" Dec 13 14:14:57.692918 kubelet[2999]: I1213 14:14:57.692878 2999 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:14:57.714233 kubelet[2999]: I1213 14:14:57.714173 2999 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:14:57.724433 kubelet[2999]: I1213 14:14:57.724286 2999 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:14:57.734658 kubelet[2999]: I1213 14:14:57.728290 2999 server.go:1256] "Started kubelet" Dec 13 14:14:57.734996 kubelet[2999]: I1213 14:14:57.734960 2999 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:14:57.735529 kubelet[2999]: I1213 14:14:57.735499 2999 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 14:14:57.735790 kubelet[2999]: I1213 14:14:57.735770 2999 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:14:57.737187 kubelet[2999]: I1213 14:14:57.737153 2999 server.go:461] "Adding debug handlers to kubelet server" Dec 13 14:14:57.737000 audit[2999]: AVC avc: denied { mac_admin } for pid=2999 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:14:57.737000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 14:14:57.751214 kernel: 
audit: type=1400 audit(1734099297.737:232): avc: denied { mac_admin } for pid=2999 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:14:57.751373 kernel: audit: type=1401 audit(1734099297.737:232): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 14:14:57.751423 kubelet[2999]: I1213 14:14:57.749073 2999 kubelet.go:1417] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Dec 13 14:14:57.737000 audit[2999]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000c66300 a1=40009959e0 a2=4000c662d0 a3=25 items=0 ppid=1 pid=2999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:14:57.753468 kubelet[2999]: I1213 14:14:57.753435 2999 kubelet.go:1421] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Dec 13 14:14:57.753692 kubelet[2999]: I1213 14:14:57.753670 2999 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 14:14:57.758014 kubelet[2999]: E1213 14:14:57.757938 2999 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 14:14:57.762842 kernel: audit: type=1300 audit(1734099297.737:232): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000c66300 a1=40009959e0 a2=4000c662d0 a3=25 items=0 ppid=1 pid=2999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:14:57.737000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 14:14:57.764555 kubelet[2999]: I1213 14:14:57.764521 2999 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 14:14:57.766144 kubelet[2999]: I1213 14:14:57.766103 2999 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 14:14:57.767383 kubelet[2999]: I1213 14:14:57.767353 2999 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 14:14:57.774065 kubelet[2999]: I1213 14:14:57.774033 2999 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:14:57.774301 kernel: audit: type=1327 audit(1734099297.737:232): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 14:14:57.774577 kubelet[2999]: I1213 14:14:57.774543 2999 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 14:14:57.752000 audit[2999]: AVC avc: denied { mac_admin } for pid=2999 comm="kubelet" 
capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:14:57.785368 kubelet[2999]: I1213 14:14:57.785334 2999 factory.go:221] Registration of the containerd container factory successfully Dec 13 14:14:57.752000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 14:14:57.790448 kernel: audit: type=1400 audit(1734099297.752:233): avc: denied { mac_admin } for pid=2999 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:14:57.792292 kernel: audit: type=1401 audit(1734099297.752:233): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 14:14:57.792420 kernel: audit: type=1300 audit(1734099297.752:233): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000a41f80 a1=4000995bf0 a2=4000c67050 a3=25 items=0 ppid=1 pid=2999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:14:57.752000 audit[2999]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000a41f80 a1=4000995bf0 a2=4000c67050 a3=25 items=0 ppid=1 pid=2999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:14:57.752000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 14:14:57.834001 kernel: audit: type=1327 audit(1734099297.752:233): 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 14:14:57.869582 kubelet[2999]: I1213 14:14:57.869383 2999 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-26-163" Dec 13 14:14:57.882840 kubelet[2999]: I1213 14:14:57.882510 2999 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 14:14:57.892377 kubelet[2999]: I1213 14:14:57.892330 2999 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 14:14:57.892601 kubelet[2999]: I1213 14:14:57.892576 2999 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 14:14:57.892779 kubelet[2999]: I1213 14:14:57.892757 2999 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 14:14:57.893172 kubelet[2999]: E1213 14:14:57.893137 2999 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 14:14:57.894937 kubelet[2999]: I1213 14:14:57.894901 2999 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-26-163" Dec 13 14:14:57.897936 kubelet[2999]: I1213 14:14:57.897878 2999 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-26-163" Dec 13 14:14:57.994846 kubelet[2999]: E1213 14:14:57.994726 2999 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 14:14:58.022436 kubelet[2999]: I1213 14:14:58.022401 2999 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 14:14:58.022674 kubelet[2999]: I1213 14:14:58.022652 2999 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 14:14:58.022842 kubelet[2999]: I1213 14:14:58.022777 2999 state_mem.go:36] "Initialized new in-memory 
state store" Dec 13 14:14:58.023345 kubelet[2999]: I1213 14:14:58.023321 2999 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 14:14:58.023501 kubelet[2999]: I1213 14:14:58.023479 2999 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 14:14:58.023678 kubelet[2999]: I1213 14:14:58.023611 2999 policy_none.go:49] "None policy: Start" Dec 13 14:14:58.025143 kubelet[2999]: I1213 14:14:58.025105 2999 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 14:14:58.025464 kubelet[2999]: I1213 14:14:58.025435 2999 state_mem.go:35] "Initializing new in-memory state store" Dec 13 14:14:58.026163 kubelet[2999]: I1213 14:14:58.026129 2999 state_mem.go:75] "Updated machine memory state" Dec 13 14:14:58.029562 kubelet[2999]: I1213 14:14:58.029524 2999 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 14:14:58.029913 kubelet[2999]: I1213 14:14:58.029884 2999 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Dec 13 14:14:58.029000 audit[2999]: AVC avc: denied { mac_admin } for pid=2999 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:14:58.029000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 14:14:58.029000 audit[2999]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40010d4f90 a1=4000ead7e8 a2=40010d4f60 a3=25 items=0 ppid=1 pid=2999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:14:58.029000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 14:14:58.036783 kubelet[2999]: I1213 14:14:58.033955 2999 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 14:14:58.196107 kubelet[2999]: I1213 14:14:58.196052 2999 topology_manager.go:215] "Topology Admit Handler" podUID="49d279f3d0e7f4fdf16c5b56b696a474" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-26-163" Dec 13 14:14:58.196496 kubelet[2999]: I1213 14:14:58.196459 2999 topology_manager.go:215] "Topology Admit Handler" podUID="197ee24df2fa788b67d4689968741292" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-26-163" Dec 13 14:14:58.196821 kubelet[2999]: I1213 14:14:58.196785 2999 topology_manager.go:215] "Topology Admit Handler" podUID="9a42a83e28e90ec64e3896b350c8a78d" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-26-163" Dec 13 14:14:58.206805 kubelet[2999]: E1213 14:14:58.206216 
2999 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-26-163\" already exists" pod="kube-system/kube-scheduler-ip-172-31-26-163" Dec 13 14:14:58.206999 kubelet[2999]: E1213 14:14:58.206944 2999 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-26-163\" already exists" pod="kube-system/kube-apiserver-ip-172-31-26-163" Dec 13 14:14:58.270707 kubelet[2999]: I1213 14:14:58.270460 2999 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/49d279f3d0e7f4fdf16c5b56b696a474-ca-certs\") pod \"kube-controller-manager-ip-172-31-26-163\" (UID: \"49d279f3d0e7f4fdf16c5b56b696a474\") " pod="kube-system/kube-controller-manager-ip-172-31-26-163" Dec 13 14:14:58.270875 kubelet[2999]: I1213 14:14:58.270727 2999 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/49d279f3d0e7f4fdf16c5b56b696a474-kubeconfig\") pod \"kube-controller-manager-ip-172-31-26-163\" (UID: \"49d279f3d0e7f4fdf16c5b56b696a474\") " pod="kube-system/kube-controller-manager-ip-172-31-26-163" Dec 13 14:14:58.270875 kubelet[2999]: I1213 14:14:58.270819 2999 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/49d279f3d0e7f4fdf16c5b56b696a474-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-26-163\" (UID: \"49d279f3d0e7f4fdf16c5b56b696a474\") " pod="kube-system/kube-controller-manager-ip-172-31-26-163" Dec 13 14:14:58.271001 kubelet[2999]: I1213 14:14:58.270932 2999 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9a42a83e28e90ec64e3896b350c8a78d-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-26-163\" 
(UID: \"9a42a83e28e90ec64e3896b350c8a78d\") " pod="kube-system/kube-apiserver-ip-172-31-26-163" Dec 13 14:14:58.271091 kubelet[2999]: I1213 14:14:58.271038 2999 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9a42a83e28e90ec64e3896b350c8a78d-ca-certs\") pod \"kube-apiserver-ip-172-31-26-163\" (UID: \"9a42a83e28e90ec64e3896b350c8a78d\") " pod="kube-system/kube-apiserver-ip-172-31-26-163" Dec 13 14:14:58.271190 kubelet[2999]: I1213 14:14:58.271152 2999 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9a42a83e28e90ec64e3896b350c8a78d-k8s-certs\") pod \"kube-apiserver-ip-172-31-26-163\" (UID: \"9a42a83e28e90ec64e3896b350c8a78d\") " pod="kube-system/kube-apiserver-ip-172-31-26-163" Dec 13 14:14:58.271328 kubelet[2999]: I1213 14:14:58.271252 2999 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/49d279f3d0e7f4fdf16c5b56b696a474-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-26-163\" (UID: \"49d279f3d0e7f4fdf16c5b56b696a474\") " pod="kube-system/kube-controller-manager-ip-172-31-26-163" Dec 13 14:14:58.271416 kubelet[2999]: I1213 14:14:58.271375 2999 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/49d279f3d0e7f4fdf16c5b56b696a474-k8s-certs\") pod \"kube-controller-manager-ip-172-31-26-163\" (UID: \"49d279f3d0e7f4fdf16c5b56b696a474\") " pod="kube-system/kube-controller-manager-ip-172-31-26-163" Dec 13 14:14:58.271519 kubelet[2999]: I1213 14:14:58.271483 2999 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/197ee24df2fa788b67d4689968741292-kubeconfig\") pod 
\"kube-scheduler-ip-172-31-26-163\" (UID: \"197ee24df2fa788b67d4689968741292\") " pod="kube-system/kube-scheduler-ip-172-31-26-163" Dec 13 14:14:58.694351 kubelet[2999]: I1213 14:14:58.694286 2999 apiserver.go:52] "Watching apiserver" Dec 13 14:14:58.767485 kubelet[2999]: I1213 14:14:58.767409 2999 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 14:14:58.984313 kubelet[2999]: I1213 14:14:58.984137 2999 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-26-163" podStartSLOduration=0.983994537 podStartE2EDuration="983.994537ms" podCreationTimestamp="2024-12-13 14:14:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:14:58.929324909 +0000 UTC m=+1.433930725" watchObservedRunningTime="2024-12-13 14:14:58.983994537 +0000 UTC m=+1.488600377" Dec 13 14:14:58.989579 kubelet[2999]: E1213 14:14:58.989517 2999 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-26-163\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-26-163" Dec 13 14:14:59.036355 kubelet[2999]: I1213 14:14:59.036308 2999 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-26-163" podStartSLOduration=3.036251433 podStartE2EDuration="3.036251433s" podCreationTimestamp="2024-12-13 14:14:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:14:58.984877672 +0000 UTC m=+1.489483680" watchObservedRunningTime="2024-12-13 14:14:59.036251433 +0000 UTC m=+1.540857201" Dec 13 14:14:59.095691 kubelet[2999]: I1213 14:14:59.095605 2999 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-26-163" podStartSLOduration=4.095542151 
podStartE2EDuration="4.095542151s" podCreationTimestamp="2024-12-13 14:14:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:14:59.037510193 +0000 UTC m=+1.542115985" watchObservedRunningTime="2024-12-13 14:14:59.095542151 +0000 UTC m=+1.600147907" Dec 13 14:15:02.631403 sudo[2159]: pam_unix(sudo:session): session closed for user root Dec 13 14:15:02.630000 audit[2159]: USER_END pid=2159 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 14:15:02.633977 kernel: kauditd_printk_skb: 4 callbacks suppressed Dec 13 14:15:02.634109 kernel: audit: type=1106 audit(1734099302.630:235): pid=2159 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 14:15:02.630000 audit[2159]: CRED_DISP pid=2159 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 14:15:02.649877 kernel: audit: type=1104 audit(1734099302.630:236): pid=2159 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 13 14:15:02.655503 sshd[2155]: pam_unix(sshd:session): session closed for user core Dec 13 14:15:02.655000 audit[2155]: USER_END pid=2155 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:15:02.655000 audit[2155]: CRED_DISP pid=2155 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:15:02.678078 kernel: audit: type=1106 audit(1734099302.655:237): pid=2155 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:15:02.678201 kernel: audit: type=1104 audit(1734099302.655:238): pid=2155 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:15:02.678594 systemd[1]: sshd@6-172.31.26.163:22-139.178.89.65:39118.service: Deactivated successfully. Dec 13 14:15:02.680714 kernel: audit: type=1131 audit(1734099302.677:239): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.26.163:22-139.178.89.65:39118 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:15:02.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.26.163:22-139.178.89.65:39118 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:15:02.689556 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 14:15:02.689845 systemd-logind[1829]: Session 7 logged out. Waiting for processes to exit. Dec 13 14:15:02.692417 systemd-logind[1829]: Removed session 7. Dec 13 14:15:04.467315 amazon-ssm-agent[1813]: 2024-12-13 14:15:04 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Dec 13 14:15:09.463781 kubelet[2999]: I1213 14:15:09.463737 2999 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 14:15:09.465848 env[1837]: time="2024-12-13T14:15:09.465779455Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 13 14:15:09.467280 kubelet[2999]: I1213 14:15:09.467197 2999 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 14:15:10.224386 kubelet[2999]: I1213 14:15:10.224320 2999 topology_manager.go:215] "Topology Admit Handler" podUID="eaea3bf2-da54-43e9-b15f-1ff3d6c65fc9" podNamespace="kube-system" podName="kube-proxy-kprbd" Dec 13 14:15:10.249565 kubelet[2999]: I1213 14:15:10.249521 2999 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eaea3bf2-da54-43e9-b15f-1ff3d6c65fc9-lib-modules\") pod \"kube-proxy-kprbd\" (UID: \"eaea3bf2-da54-43e9-b15f-1ff3d6c65fc9\") " pod="kube-system/kube-proxy-kprbd" Dec 13 14:15:10.249895 kubelet[2999]: I1213 14:15:10.249864 2999 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/eaea3bf2-da54-43e9-b15f-1ff3d6c65fc9-kube-proxy\") pod \"kube-proxy-kprbd\" (UID: \"eaea3bf2-da54-43e9-b15f-1ff3d6c65fc9\") " pod="kube-system/kube-proxy-kprbd" Dec 13 14:15:10.250167 kubelet[2999]: I1213 14:15:10.250136 2999 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eaea3bf2-da54-43e9-b15f-1ff3d6c65fc9-xtables-lock\") pod \"kube-proxy-kprbd\" (UID: \"eaea3bf2-da54-43e9-b15f-1ff3d6c65fc9\") " pod="kube-system/kube-proxy-kprbd" Dec 13 14:15:10.250406 kubelet[2999]: I1213 14:15:10.250372 2999 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcbs6\" (UniqueName: \"kubernetes.io/projected/eaea3bf2-da54-43e9-b15f-1ff3d6c65fc9-kube-api-access-tcbs6\") pod \"kube-proxy-kprbd\" (UID: \"eaea3bf2-da54-43e9-b15f-1ff3d6c65fc9\") " pod="kube-system/kube-proxy-kprbd" Dec 13 14:15:10.365995 kubelet[2999]: E1213 14:15:10.365952 2999 projected.go:294] Couldn't get 
configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 13 14:15:10.366266 kubelet[2999]: E1213 14:15:10.366244 2999 projected.go:200] Error preparing data for projected volume kube-api-access-tcbs6 for pod kube-system/kube-proxy-kprbd: configmap "kube-root-ca.crt" not found Dec 13 14:15:10.366559 kubelet[2999]: E1213 14:15:10.366507 2999 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eaea3bf2-da54-43e9-b15f-1ff3d6c65fc9-kube-api-access-tcbs6 podName:eaea3bf2-da54-43e9-b15f-1ff3d6c65fc9 nodeName:}" failed. No retries permitted until 2024-12-13 14:15:10.866471069 +0000 UTC m=+13.371076837 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tcbs6" (UniqueName: "kubernetes.io/projected/eaea3bf2-da54-43e9-b15f-1ff3d6c65fc9-kube-api-access-tcbs6") pod "kube-proxy-kprbd" (UID: "eaea3bf2-da54-43e9-b15f-1ff3d6c65fc9") : configmap "kube-root-ca.crt" not found Dec 13 14:15:10.569415 kubelet[2999]: I1213 14:15:10.569254 2999 topology_manager.go:215] "Topology Admit Handler" podUID="3a13095c-e412-45f1-a41a-3977688dff7b" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-n6v66" Dec 13 14:15:10.653464 kubelet[2999]: I1213 14:15:10.653419 2999 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3a13095c-e412-45f1-a41a-3977688dff7b-var-lib-calico\") pod \"tigera-operator-c7ccbd65-n6v66\" (UID: \"3a13095c-e412-45f1-a41a-3977688dff7b\") " pod="tigera-operator/tigera-operator-c7ccbd65-n6v66" Dec 13 14:15:10.653854 kubelet[2999]: I1213 14:15:10.653820 2999 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn59t\" (UniqueName: \"kubernetes.io/projected/3a13095c-e412-45f1-a41a-3977688dff7b-kube-api-access-vn59t\") pod \"tigera-operator-c7ccbd65-n6v66\" (UID: \"3a13095c-e412-45f1-a41a-3977688dff7b\") " 
pod="tigera-operator/tigera-operator-c7ccbd65-n6v66" Dec 13 14:15:10.879376 env[1837]: time="2024-12-13T14:15:10.878801315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-n6v66,Uid:3a13095c-e412-45f1-a41a-3977688dff7b,Namespace:tigera-operator,Attempt:0,}" Dec 13 14:15:10.919793 env[1837]: time="2024-12-13T14:15:10.919556676Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:15:10.919983 env[1837]: time="2024-12-13T14:15:10.919809216Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:15:10.919983 env[1837]: time="2024-12-13T14:15:10.919873035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:15:10.920289 env[1837]: time="2024-12-13T14:15:10.920173046Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6e1aae40d73f3a75a6140988fa7b085d96ca0a237190298062117d2b0dfff84d pid=3085 runtime=io.containerd.runc.v2 Dec 13 14:15:10.966314 systemd[1]: run-containerd-runc-k8s.io-6e1aae40d73f3a75a6140988fa7b085d96ca0a237190298062117d2b0dfff84d-runc.vBgpNY.mount: Deactivated successfully. 
Dec 13 14:15:11.053817 env[1837]: time="2024-12-13T14:15:11.053712974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-n6v66,Uid:3a13095c-e412-45f1-a41a-3977688dff7b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6e1aae40d73f3a75a6140988fa7b085d96ca0a237190298062117d2b0dfff84d\"" Dec 13 14:15:11.059308 env[1837]: time="2024-12-13T14:15:11.059257162Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Dec 13 14:15:11.134549 env[1837]: time="2024-12-13T14:15:11.133957245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kprbd,Uid:eaea3bf2-da54-43e9-b15f-1ff3d6c65fc9,Namespace:kube-system,Attempt:0,}" Dec 13 14:15:11.165897 env[1837]: time="2024-12-13T14:15:11.165461992Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:15:11.165897 env[1837]: time="2024-12-13T14:15:11.165558902Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:15:11.165897 env[1837]: time="2024-12-13T14:15:11.165585932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:15:11.167815 env[1837]: time="2024-12-13T14:15:11.166444584Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fa01d00c077782cac06f4dbf9ec4553e1bd24d10688a6fbd1aea7b3c5f376e0d pid=3127 runtime=io.containerd.runc.v2 Dec 13 14:15:11.252936 env[1837]: time="2024-12-13T14:15:11.252880628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kprbd,Uid:eaea3bf2-da54-43e9-b15f-1ff3d6c65fc9,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa01d00c077782cac06f4dbf9ec4553e1bd24d10688a6fbd1aea7b3c5f376e0d\"" Dec 13 14:15:11.259926 env[1837]: time="2024-12-13T14:15:11.259818863Z" level=info msg="CreateContainer within sandbox \"fa01d00c077782cac06f4dbf9ec4553e1bd24d10688a6fbd1aea7b3c5f376e0d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 14:15:11.288401 env[1837]: time="2024-12-13T14:15:11.288329092Z" level=info msg="CreateContainer within sandbox \"fa01d00c077782cac06f4dbf9ec4553e1bd24d10688a6fbd1aea7b3c5f376e0d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e6c69dda203ba66ccbb9f53be9a4f2d9fe8ae4e2c2342b8e02797e188ce3324c\"" Dec 13 14:15:11.290090 env[1837]: time="2024-12-13T14:15:11.290013990Z" level=info msg="StartContainer for \"e6c69dda203ba66ccbb9f53be9a4f2d9fe8ae4e2c2342b8e02797e188ce3324c\"" Dec 13 14:15:11.409138 env[1837]: time="2024-12-13T14:15:11.408446232Z" level=info msg="StartContainer for \"e6c69dda203ba66ccbb9f53be9a4f2d9fe8ae4e2c2342b8e02797e188ce3324c\" returns successfully" Dec 13 14:15:11.541000 audit[3219]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=3219 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:15:11.541000 audit[3219]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe5d92580 a2=0 a3=1 items=0 ppid=3178 pid=3219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:11.548656 kernel: audit: type=1325 audit(1734099311.541:240): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=3219 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:15:11.541000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 13 14:15:11.562916 kernel: audit: type=1300 audit(1734099311.541:240): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe5d92580 a2=0 a3=1 items=0 ppid=3178 pid=3219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:11.542000 audit[3221]: NETFILTER_CFG table=nat:39 family=2 entries=1 op=nft_register_chain pid=3221 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:15:11.571673 kernel: audit: type=1327 audit(1734099311.541:240): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 13 14:15:11.542000 audit[3221]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffffb98c80 a2=0 a3=1 items=0 ppid=3178 pid=3221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:11.577763 kernel: audit: type=1325 audit(1734099311.542:241): table=nat:39 family=2 entries=1 op=nft_register_chain pid=3221 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:15:11.542000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 13 14:15:11.593958 kernel: audit: type=1300 audit(1734099311.542:241): arch=c00000b7 syscall=211 
success=yes exit=100 a0=3 a1=ffffffb98c80 a2=0 a3=1 items=0 ppid=3178 pid=3221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:11.594099 kernel: audit: type=1327 audit(1734099311.542:241): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 13 14:15:11.542000 audit[3222]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_chain pid=3222 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:15:11.599556 kernel: audit: type=1325 audit(1734099311.542:242): table=filter:40 family=2 entries=1 op=nft_register_chain pid=3222 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:15:11.542000 audit[3222]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe795d270 a2=0 a3=1 items=0 ppid=3178 pid=3222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:11.610432 kernel: audit: type=1300 audit(1734099311.542:242): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe795d270 a2=0 a3=1 items=0 ppid=3178 pid=3222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:11.542000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 13 14:15:11.616351 kernel: audit: type=1327 audit(1734099311.542:242): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 13 14:15:11.563000 audit[3220]: NETFILTER_CFG table=mangle:41 family=10 entries=1 op=nft_register_chain 
pid=3220 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:15:11.563000 audit[3220]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff42ceb30 a2=0 a3=1 items=0 ppid=3178 pid=3220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:11.622674 kernel: audit: type=1325 audit(1734099311.563:243): table=mangle:41 family=10 entries=1 op=nft_register_chain pid=3220 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:15:11.563000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 13 14:15:11.571000 audit[3223]: NETFILTER_CFG table=nat:42 family=10 entries=1 op=nft_register_chain pid=3223 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:15:11.571000 audit[3223]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcfeb2790 a2=0 a3=1 items=0 ppid=3178 pid=3223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:11.571000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 13 14:15:11.577000 audit[3224]: NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=3224 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:15:11.577000 audit[3224]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe99a3370 a2=0 a3=1 items=0 ppid=3178 pid=3224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:11.577000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 13 14:15:11.651000 audit[3225]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=3225 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:15:11.651000 audit[3225]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffd474a8f0 a2=0 a3=1 items=0 ppid=3178 pid=3225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:11.651000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 13 14:15:11.658000 audit[3227]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=3227 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:15:11.658000 audit[3227]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffff7af9120 a2=0 a3=1 items=0 ppid=3178 pid=3227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:11.658000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Dec 13 14:15:11.674000 audit[3230]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=3230 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:15:11.674000 audit[3230]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffdd5c7400 a2=0 a3=1 items=0 ppid=3178 pid=3230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:11.674000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Dec 13 14:15:11.677000 audit[3231]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=3231 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:15:11.677000 audit[3231]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffefd9b3c0 a2=0 a3=1 items=0 ppid=3178 pid=3231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:11.677000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 13 14:15:11.685000 audit[3233]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=3233 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:15:11.685000 audit[3233]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff4e5ce10 a2=0 a3=1 items=0 ppid=3178 pid=3233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:11.685000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 13 14:15:11.688000 audit[3234]: NETFILTER_CFG table=filter:49 family=2 entries=1 
op=nft_register_chain pid=3234 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:15:11.688000 audit[3234]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe26c8cd0 a2=0 a3=1 items=0 ppid=3178 pid=3234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:11.688000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 13 14:15:11.694000 audit[3236]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=3236 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:15:11.694000 audit[3236]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff6b9dee0 a2=0 a3=1 items=0 ppid=3178 pid=3236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:11.694000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 13 14:15:11.702000 audit[3239]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=3239 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:15:11.702000 audit[3239]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffffee1eab0 a2=0 a3=1 items=0 ppid=3178 pid=3239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:11.702000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Dec 13 14:15:11.706000 audit[3240]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=3240 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:15:11.706000 audit[3240]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc9b4a760 a2=0 a3=1 items=0 ppid=3178 pid=3240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:11.706000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 13 14:15:11.712000 audit[3242]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=3242 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:15:11.712000 audit[3242]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc6a2ff80 a2=0 a3=1 items=0 ppid=3178 pid=3242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:11.712000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 13 14:15:11.716000 audit[3243]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=3243 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:15:11.716000 audit[3243]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd6834600 a2=0 a3=1 
items=0 ppid=3178 pid=3243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:11.716000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 13 14:15:11.726000 audit[3245]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=3245 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:15:11.726000 audit[3245]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe2db1c90 a2=0 a3=1 items=0 ppid=3178 pid=3245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:11.726000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 13 14:15:11.736000 audit[3248]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=3248 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:15:11.736000 audit[3248]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe97b9090 a2=0 a3=1 items=0 ppid=3178 pid=3248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:11.736000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 13 14:15:11.745000 audit[3251]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=3251 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:15:11.745000 audit[3251]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffcff11e50 a2=0 a3=1 items=0 ppid=3178 pid=3251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:11.745000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 13 14:15:11.750000 audit[3252]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=3252 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:15:11.750000 audit[3252]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe7af0b30 a2=0 a3=1 items=0 ppid=3178 pid=3252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:11.750000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 13 14:15:11.756000 audit[3254]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=3254 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:15:11.756000 audit[3254]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 
a1=ffffe5b59a00 a2=0 a3=1 items=0 ppid=3178 pid=3254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:11.756000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 14:15:11.766000 audit[3257]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=3257 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:15:11.766000 audit[3257]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe586e2f0 a2=0 a3=1 items=0 ppid=3178 pid=3257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:11.766000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 14:15:11.780000 audit[3258]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=3258 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:15:11.780000 audit[3258]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd435c970 a2=0 a3=1 items=0 ppid=3178 pid=3258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:11.780000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 13 14:15:11.794000 audit[3260]: NETFILTER_CFG 
table=nat:62 family=2 entries=1 op=nft_register_rule pid=3260 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:15:11.794000 audit[3260]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffd8a02680 a2=0 a3=1 items=0 ppid=3178 pid=3260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:11.794000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 13 14:15:11.831000 audit[3266]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=3266 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:15:11.831000 audit[3266]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5164 a0=3 a1=ffffd2e99120 a2=0 a3=1 items=0 ppid=3178 pid=3266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:11.831000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:15:11.844000 audit[3266]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=3266 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:15:11.844000 audit[3266]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffd2e99120 a2=0 a3=1 items=0 ppid=3178 pid=3266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:11.844000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:15:11.847000 audit[3272]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=3272 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:15:11.847000 audit[3272]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffd357ba60 a2=0 a3=1 items=0 ppid=3178 pid=3272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:11.847000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 13 14:15:11.853000 audit[3274]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=3274 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:15:11.853000 audit[3274]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffe3111380 a2=0 a3=1 items=0 ppid=3178 pid=3274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:11.853000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Dec 13 14:15:11.861000 audit[3277]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=3277 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:15:11.861000 audit[3277]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=fffffa47a4d0 a2=0 a3=1 items=0 ppid=3178 pid=3277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:11.861000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Dec 13 14:15:11.864000 audit[3278]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=3278 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:15:11.864000 audit[3278]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc33002f0 a2=0 a3=1 items=0 ppid=3178 pid=3278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:11.864000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 13 14:15:11.871000 audit[3280]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=3280 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:15:11.871000 audit[3280]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe38d8e20 a2=0 a3=1 items=0 ppid=3178 pid=3280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:11.871000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 13 14:15:11.875000 audit[3281]: NETFILTER_CFG table=filter:70 family=10 
entries=1 op=nft_register_chain pid=3281 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:15:11.875000 audit[3281]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffca5e22b0 a2=0 a3=1 items=0 ppid=3178 pid=3281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:11.875000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 13 14:15:11.889000 audit[3283]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=3283 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:15:11.889000 audit[3283]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffe3d37cc0 a2=0 a3=1 items=0 ppid=3178 pid=3283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:11.889000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Dec 13 14:15:11.901000 audit[3286]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=3286 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:15:11.901000 audit[3286]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=fffff4856780 a2=0 a3=1 items=0 ppid=3178 pid=3286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:11.901000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 13 14:15:11.905000 audit[3287]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=3287 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:15:11.905000 audit[3287]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd7ed80b0 a2=0 a3=1 items=0 ppid=3178 pid=3287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:11.905000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 13 14:15:11.911000 audit[3289]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=3289 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:15:11.911000 audit[3289]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff7beddc0 a2=0 a3=1 items=0 ppid=3178 pid=3289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:11.911000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 13 14:15:11.914000 audit[3290]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=3290 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:15:11.914000 audit[3290]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcd7de0a0 a2=0 
a3=1 items=0 ppid=3178 pid=3290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:11.914000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 13 14:15:11.919000 audit[3292]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=3292 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:15:11.919000 audit[3292]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe8b7c160 a2=0 a3=1 items=0 ppid=3178 pid=3292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:11.919000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 13 14:15:11.927000 audit[3295]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=3295 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:15:11.927000 audit[3295]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe6ed7f30 a2=0 a3=1 items=0 ppid=3178 pid=3295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:11.927000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 13 14:15:11.938000 audit[3298]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=3298 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:15:11.938000 audit[3298]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff736df90 a2=0 a3=1 items=0 ppid=3178 pid=3298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:11.938000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Dec 13 14:15:11.941000 audit[3299]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=3299 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:15:11.941000 audit[3299]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffdf4cbc60 a2=0 a3=1 items=0 ppid=3178 pid=3299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:11.941000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 13 14:15:11.946000 audit[3301]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=3301 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:15:11.946000 audit[3301]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 
a0=3 a1=ffffe8b75390 a2=0 a3=1 items=0 ppid=3178 pid=3301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:11.946000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 14:15:11.961000 audit[3304]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=3304 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:15:11.961000 audit[3304]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffe381c110 a2=0 a3=1 items=0 ppid=3178 pid=3304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:11.961000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 14:15:11.964000 audit[3305]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=3305 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:15:11.964000 audit[3305]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffece277f0 a2=0 a3=1 items=0 ppid=3178 pid=3305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:11.964000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 13 14:15:11.969000 audit[3307]: 
NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=3307 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:15:11.969000 audit[3307]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffe37552c0 a2=0 a3=1 items=0 ppid=3178 pid=3307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:11.969000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 13 14:15:11.972000 audit[3308]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3308 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:15:11.972000 audit[3308]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff6c21a40 a2=0 a3=1 items=0 ppid=3178 pid=3308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:11.972000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 13 14:15:11.977000 audit[3310]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3310 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:15:11.977000 audit[3310]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffc314d3a0 a2=0 a3=1 items=0 ppid=3178 pid=3310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:11.977000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 14:15:11.999000 audit[3313]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=3313 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:15:11.999000 audit[3313]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffff5a8ee0 a2=0 a3=1 items=0 ppid=3178 pid=3313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:11.999000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 14:15:12.016000 audit[3315]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=3315 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 13 14:15:12.016000 audit[3315]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2004 a0=3 a1=ffffd01092b0 a2=0 a3=1 items=0 ppid=3178 pid=3315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:12.016000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:15:12.017000 audit[3315]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=3315 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 13 14:15:12.017000 audit[3315]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffd01092b0 a2=0 a3=1 items=0 ppid=3178 pid=3315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:12.017000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:15:13.172683 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount641322946.mount: Deactivated successfully. Dec 13 14:15:14.461789 env[1837]: time="2024-12-13T14:15:14.461730907Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.36.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:14.464848 env[1837]: time="2024-12-13T14:15:14.464795928Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:14.468232 env[1837]: time="2024-12-13T14:15:14.468170242Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.36.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:14.471210 env[1837]: time="2024-12-13T14:15:14.471113245Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:14.473964 env[1837]: time="2024-12-13T14:15:14.473888178Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Dec 13 14:15:14.478507 env[1837]: time="2024-12-13T14:15:14.478444354Z" level=info msg="CreateContainer within sandbox \"6e1aae40d73f3a75a6140988fa7b085d96ca0a237190298062117d2b0dfff84d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 13 14:15:14.504117 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4176789220.mount: Deactivated 
successfully. Dec 13 14:15:14.517431 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3321936252.mount: Deactivated successfully. Dec 13 14:15:14.525339 env[1837]: time="2024-12-13T14:15:14.525277680Z" level=info msg="CreateContainer within sandbox \"6e1aae40d73f3a75a6140988fa7b085d96ca0a237190298062117d2b0dfff84d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"1710dccdff80addd75f8f23289752d3de778549cf515ccddd4cb48ee46663674\"" Dec 13 14:15:14.528310 env[1837]: time="2024-12-13T14:15:14.528240368Z" level=info msg="StartContainer for \"1710dccdff80addd75f8f23289752d3de778549cf515ccddd4cb48ee46663674\"" Dec 13 14:15:14.639088 env[1837]: time="2024-12-13T14:15:14.639001738Z" level=info msg="StartContainer for \"1710dccdff80addd75f8f23289752d3de778549cf515ccddd4cb48ee46663674\" returns successfully" Dec 13 14:15:15.024018 kubelet[2999]: I1213 14:15:15.023398 2999 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-kprbd" podStartSLOduration=5.023313401 podStartE2EDuration="5.023313401s" podCreationTimestamp="2024-12-13 14:15:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:15:12.005952674 +0000 UTC m=+14.510558454" watchObservedRunningTime="2024-12-13 14:15:15.023313401 +0000 UTC m=+17.527919181" Dec 13 14:15:17.916014 kubelet[2999]: I1213 14:15:17.915952 2999 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-n6v66" podStartSLOduration=4.497491522 podStartE2EDuration="7.915891388s" podCreationTimestamp="2024-12-13 14:15:10 +0000 UTC" firstStartedPulling="2024-12-13 14:15:11.056065452 +0000 UTC m=+13.560671220" lastFinishedPulling="2024-12-13 14:15:14.474465342 +0000 UTC m=+16.979071086" observedRunningTime="2024-12-13 14:15:15.023970283 +0000 UTC m=+17.528576075" watchObservedRunningTime="2024-12-13 14:15:17.915891388 
+0000 UTC m=+20.420497168" Dec 13 14:15:19.364000 audit[3356]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=3356 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:15:19.367885 kernel: kauditd_printk_skb: 143 callbacks suppressed Dec 13 14:15:19.367996 kernel: audit: type=1325 audit(1734099319.364:291): table=filter:89 family=2 entries=15 op=nft_register_rule pid=3356 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:15:19.364000 audit[3356]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=ffffcc0c9e70 a2=0 a3=1 items=0 ppid=3178 pid=3356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:19.384874 kernel: audit: type=1300 audit(1734099319.364:291): arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=ffffcc0c9e70 a2=0 a3=1 items=0 ppid=3178 pid=3356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:19.364000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:15:19.392095 kernel: audit: type=1327 audit(1734099319.364:291): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:15:19.384000 audit[3356]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=3356 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:15:19.397698 kernel: audit: type=1325 audit(1734099319.384:292): table=nat:90 family=2 entries=12 op=nft_register_rule pid=3356 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:15:19.384000 audit[3356]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=2700 a0=3 a1=ffffcc0c9e70 a2=0 a3=1 items=0 ppid=3178 pid=3356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:19.408581 kernel: audit: type=1300 audit(1734099319.384:292): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffcc0c9e70 a2=0 a3=1 items=0 ppid=3178 pid=3356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:19.384000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:15:19.413746 kernel: audit: type=1327 audit(1734099319.384:292): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:15:19.426000 audit[3358]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=3358 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:15:19.434664 kernel: audit: type=1325 audit(1734099319.426:293): table=filter:91 family=2 entries=16 op=nft_register_rule pid=3358 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:15:19.426000 audit[3358]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=ffffc741ffe0 a2=0 a3=1 items=0 ppid=3178 pid=3358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:19.426000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:15:19.451493 kernel: audit: type=1300 audit(1734099319.426:293): arch=c00000b7 syscall=211 success=yes 
exit=5908 a0=3 a1=ffffc741ffe0 a2=0 a3=1 items=0 ppid=3178 pid=3358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:19.451665 kernel: audit: type=1327 audit(1734099319.426:293): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:15:19.454000 audit[3358]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=3358 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:15:19.454000 audit[3358]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc741ffe0 a2=0 a3=1 items=0 ppid=3178 pid=3358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:19.454000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:15:19.463664 kernel: audit: type=1325 audit(1734099319.454:294): table=nat:92 family=2 entries=12 op=nft_register_rule pid=3358 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:15:19.932130 kubelet[2999]: I1213 14:15:19.932062 2999 topology_manager.go:215] "Topology Admit Handler" podUID="5b4faaed-1369-4031-84ad-ab879c2d70c7" podNamespace="calico-system" podName="calico-typha-b7cdcbf85-gkn6h" Dec 13 14:15:20.024089 kubelet[2999]: I1213 14:15:20.024008 2999 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44shd\" (UniqueName: \"kubernetes.io/projected/5b4faaed-1369-4031-84ad-ab879c2d70c7-kube-api-access-44shd\") pod \"calico-typha-b7cdcbf85-gkn6h\" (UID: \"5b4faaed-1369-4031-84ad-ab879c2d70c7\") " pod="calico-system/calico-typha-b7cdcbf85-gkn6h" Dec 13 
14:15:20.024374 kubelet[2999]: I1213 14:15:20.024344 2999 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5b4faaed-1369-4031-84ad-ab879c2d70c7-typha-certs\") pod \"calico-typha-b7cdcbf85-gkn6h\" (UID: \"5b4faaed-1369-4031-84ad-ab879c2d70c7\") " pod="calico-system/calico-typha-b7cdcbf85-gkn6h" Dec 13 14:15:20.024645 kubelet[2999]: I1213 14:15:20.024587 2999 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b4faaed-1369-4031-84ad-ab879c2d70c7-tigera-ca-bundle\") pod \"calico-typha-b7cdcbf85-gkn6h\" (UID: \"5b4faaed-1369-4031-84ad-ab879c2d70c7\") " pod="calico-system/calico-typha-b7cdcbf85-gkn6h" Dec 13 14:15:20.096027 kubelet[2999]: I1213 14:15:20.095980 2999 topology_manager.go:215] "Topology Admit Handler" podUID="52388ff0-0bba-4189-a650-8a2e231fa4cd" podNamespace="calico-system" podName="calico-node-db756" Dec 13 14:15:20.129231 kubelet[2999]: I1213 14:15:20.129137 2999 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/52388ff0-0bba-4189-a650-8a2e231fa4cd-lib-modules\") pod \"calico-node-db756\" (UID: \"52388ff0-0bba-4189-a650-8a2e231fa4cd\") " pod="calico-system/calico-node-db756" Dec 13 14:15:20.129392 kubelet[2999]: I1213 14:15:20.129262 2999 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/52388ff0-0bba-4189-a650-8a2e231fa4cd-cni-bin-dir\") pod \"calico-node-db756\" (UID: \"52388ff0-0bba-4189-a650-8a2e231fa4cd\") " pod="calico-system/calico-node-db756" Dec 13 14:15:20.129392 kubelet[2999]: I1213 14:15:20.129315 2999 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: 
\"kubernetes.io/host-path/52388ff0-0bba-4189-a650-8a2e231fa4cd-cni-log-dir\") pod \"calico-node-db756\" (UID: \"52388ff0-0bba-4189-a650-8a2e231fa4cd\") " pod="calico-system/calico-node-db756" Dec 13 14:15:20.129548 kubelet[2999]: I1213 14:15:20.129381 2999 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52388ff0-0bba-4189-a650-8a2e231fa4cd-tigera-ca-bundle\") pod \"calico-node-db756\" (UID: \"52388ff0-0bba-4189-a650-8a2e231fa4cd\") " pod="calico-system/calico-node-db756" Dec 13 14:15:20.129548 kubelet[2999]: I1213 14:15:20.129458 2999 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/52388ff0-0bba-4189-a650-8a2e231fa4cd-var-lib-calico\") pod \"calico-node-db756\" (UID: \"52388ff0-0bba-4189-a650-8a2e231fa4cd\") " pod="calico-system/calico-node-db756" Dec 13 14:15:20.129548 kubelet[2999]: I1213 14:15:20.129525 2999 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/52388ff0-0bba-4189-a650-8a2e231fa4cd-node-certs\") pod \"calico-node-db756\" (UID: \"52388ff0-0bba-4189-a650-8a2e231fa4cd\") " pod="calico-system/calico-node-db756" Dec 13 14:15:20.129776 kubelet[2999]: I1213 14:15:20.129612 2999 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/52388ff0-0bba-4189-a650-8a2e231fa4cd-policysync\") pod \"calico-node-db756\" (UID: \"52388ff0-0bba-4189-a650-8a2e231fa4cd\") " pod="calico-system/calico-node-db756" Dec 13 14:15:20.133821 kubelet[2999]: I1213 14:15:20.133763 2999 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xgbx\" (UniqueName: 
\"kubernetes.io/projected/52388ff0-0bba-4189-a650-8a2e231fa4cd-kube-api-access-7xgbx\") pod \"calico-node-db756\" (UID: \"52388ff0-0bba-4189-a650-8a2e231fa4cd\") " pod="calico-system/calico-node-db756" Dec 13 14:15:20.135224 kubelet[2999]: I1213 14:15:20.135169 2999 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/52388ff0-0bba-4189-a650-8a2e231fa4cd-var-run-calico\") pod \"calico-node-db756\" (UID: \"52388ff0-0bba-4189-a650-8a2e231fa4cd\") " pod="calico-system/calico-node-db756" Dec 13 14:15:20.135379 kubelet[2999]: I1213 14:15:20.135261 2999 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/52388ff0-0bba-4189-a650-8a2e231fa4cd-flexvol-driver-host\") pod \"calico-node-db756\" (UID: \"52388ff0-0bba-4189-a650-8a2e231fa4cd\") " pod="calico-system/calico-node-db756" Dec 13 14:15:20.135379 kubelet[2999]: I1213 14:15:20.135325 2999 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/52388ff0-0bba-4189-a650-8a2e231fa4cd-xtables-lock\") pod \"calico-node-db756\" (UID: \"52388ff0-0bba-4189-a650-8a2e231fa4cd\") " pod="calico-system/calico-node-db756" Dec 13 14:15:20.135506 kubelet[2999]: I1213 14:15:20.135387 2999 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/52388ff0-0bba-4189-a650-8a2e231fa4cd-cni-net-dir\") pod \"calico-node-db756\" (UID: \"52388ff0-0bba-4189-a650-8a2e231fa4cd\") " pod="calico-system/calico-node-db756" Dec 13 14:15:20.238892 kubelet[2999]: E1213 14:15:20.237875 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.239181 kubelet[2999]: W1213 14:15:20.239125 2999 
driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.239375 kubelet[2999]: E1213 14:15:20.239335 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:20.240034 kubelet[2999]: E1213 14:15:20.240004 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.240252 kubelet[2999]: W1213 14:15:20.240223 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.240403 kubelet[2999]: E1213 14:15:20.240380 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:20.243892 kubelet[2999]: E1213 14:15:20.243856 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.244105 kubelet[2999]: W1213 14:15:20.244074 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.249713 env[1837]: time="2024-12-13T14:15:20.245232600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-b7cdcbf85-gkn6h,Uid:5b4faaed-1369-4031-84ad-ab879c2d70c7,Namespace:calico-system,Attempt:0,}" Dec 13 14:15:20.250313 kubelet[2999]: E1213 14:15:20.246065 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:15:20.250661 kubelet[2999]: E1213 14:15:20.250610 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.250848 kubelet[2999]: W1213 14:15:20.250819 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.251018 kubelet[2999]: E1213 14:15:20.250994 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:20.257543 kubelet[2999]: E1213 14:15:20.257505 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.257760 kubelet[2999]: W1213 14:15:20.257731 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.268597 kubelet[2999]: E1213 14:15:20.259285 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:15:20.271065 kubelet[2999]: E1213 14:15:20.271031 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.271281 kubelet[2999]: W1213 14:15:20.271253 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.287904 kubelet[2999]: E1213 14:15:20.287844 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:20.290674 kubelet[2999]: E1213 14:15:20.290612 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.290939 kubelet[2999]: W1213 14:15:20.290887 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.304522 kubelet[2999]: E1213 14:15:20.304484 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.304756 kubelet[2999]: W1213 14:15:20.304726 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.306433 kubelet[2999]: I1213 14:15:20.306396 2999 topology_manager.go:215] "Topology Admit Handler" podUID="83621a84-eb8a-4acb-be6b-37240d10ca28" podNamespace="calico-system" podName="csi-node-driver-79sgh" Dec 13 14:15:20.307469 kubelet[2999]: E1213 14:15:20.307433 2999 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-79sgh" podUID="83621a84-eb8a-4acb-be6b-37240d10ca28" Dec 13 14:15:20.309814 kubelet[2999]: E1213 14:15:20.309777 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:20.310083 kubelet[2999]: E1213 14:15:20.310053 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:20.310951 kubelet[2999]: E1213 14:15:20.310914 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.311188 kubelet[2999]: W1213 14:15:20.311154 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.319940 kubelet[2999]: E1213 14:15:20.315744 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.319940 kubelet[2999]: W1213 14:15:20.315786 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.319940 kubelet[2999]: E1213 14:15:20.316141 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:15:20.319940 kubelet[2999]: E1213 14:15:20.316184 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:20.319940 kubelet[2999]: E1213 14:15:20.317946 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.319940 kubelet[2999]: W1213 14:15:20.317973 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.319940 kubelet[2999]: E1213 14:15:20.319571 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.319940 kubelet[2999]: W1213 14:15:20.319602 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.328366 kubelet[2999]: E1213 14:15:20.324640 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.328366 kubelet[2999]: W1213 14:15:20.324702 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.328366 kubelet[2999]: E1213 14:15:20.325484 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.328366 kubelet[2999]: W1213 14:15:20.325510 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: 
[init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.328366 kubelet[2999]: E1213 14:15:20.325957 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.328366 kubelet[2999]: W1213 14:15:20.325977 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.328366 kubelet[2999]: E1213 14:15:20.326009 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:20.328366 kubelet[2999]: E1213 14:15:20.326317 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.328366 kubelet[2999]: W1213 14:15:20.326334 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.328366 kubelet[2999]: E1213 14:15:20.326363 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:15:20.328366 kubelet[2999]: E1213 14:15:20.326887 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.329100 kubelet[2999]: W1213 14:15:20.326907 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.329100 kubelet[2999]: E1213 14:15:20.327009 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:20.329100 kubelet[2999]: E1213 14:15:20.327060 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:20.329100 kubelet[2999]: E1213 14:15:20.327392 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.329100 kubelet[2999]: W1213 14:15:20.327410 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.329100 kubelet[2999]: E1213 14:15:20.327434 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:15:20.329100 kubelet[2999]: E1213 14:15:20.327788 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.329100 kubelet[2999]: W1213 14:15:20.327817 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.329100 kubelet[2999]: E1213 14:15:20.327843 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:20.329100 kubelet[2999]: E1213 14:15:20.327883 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:20.329654 kubelet[2999]: E1213 14:15:20.328332 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.329654 kubelet[2999]: W1213 14:15:20.328350 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.329654 kubelet[2999]: E1213 14:15:20.328375 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:15:20.329654 kubelet[2999]: E1213 14:15:20.328784 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.329654 kubelet[2999]: W1213 14:15:20.328802 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.329654 kubelet[2999]: E1213 14:15:20.328828 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:20.329654 kubelet[2999]: E1213 14:15:20.329105 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.329654 kubelet[2999]: W1213 14:15:20.329167 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.329654 kubelet[2999]: E1213 14:15:20.329211 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:15:20.329654 kubelet[2999]: E1213 14:15:20.329481 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.330173 kubelet[2999]: W1213 14:15:20.329496 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.330173 kubelet[2999]: E1213 14:15:20.329518 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:20.330173 kubelet[2999]: E1213 14:15:20.329830 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.330173 kubelet[2999]: W1213 14:15:20.329846 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.330173 kubelet[2999]: E1213 14:15:20.329868 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:20.330173 kubelet[2999]: E1213 14:15:20.329909 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:15:20.330173 kubelet[2999]: E1213 14:15:20.330160 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.330173 kubelet[2999]: W1213 14:15:20.330175 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.330584 kubelet[2999]: E1213 14:15:20.330197 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:20.330584 kubelet[2999]: E1213 14:15:20.330504 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.330584 kubelet[2999]: W1213 14:15:20.330520 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.330584 kubelet[2999]: E1213 14:15:20.330542 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:15:20.336416 kubelet[2999]: E1213 14:15:20.330870 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.336416 kubelet[2999]: W1213 14:15:20.330904 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.336416 kubelet[2999]: E1213 14:15:20.330943 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:20.336416 kubelet[2999]: E1213 14:15:20.330990 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:20.336416 kubelet[2999]: E1213 14:15:20.331400 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.336416 kubelet[2999]: W1213 14:15:20.331423 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.336416 kubelet[2999]: E1213 14:15:20.331456 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:15:20.336416 kubelet[2999]: E1213 14:15:20.332044 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.336416 kubelet[2999]: W1213 14:15:20.332090 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.336416 kubelet[2999]: E1213 14:15:20.332121 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:20.337095 kubelet[2999]: E1213 14:15:20.332655 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.337095 kubelet[2999]: W1213 14:15:20.332677 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.337095 kubelet[2999]: E1213 14:15:20.332704 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:15:20.337095 kubelet[2999]: E1213 14:15:20.333222 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.337095 kubelet[2999]: W1213 14:15:20.333269 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.337095 kubelet[2999]: E1213 14:15:20.333315 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:20.337095 kubelet[2999]: E1213 14:15:20.335646 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.337095 kubelet[2999]: W1213 14:15:20.335676 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.337095 kubelet[2999]: E1213 14:15:20.335736 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:15:20.337095 kubelet[2999]: E1213 14:15:20.336222 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.337682 kubelet[2999]: W1213 14:15:20.336279 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.337682 kubelet[2999]: E1213 14:15:20.336313 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:20.338566 kubelet[2999]: E1213 14:15:20.338490 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.338566 kubelet[2999]: W1213 14:15:20.338552 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.338779 kubelet[2999]: E1213 14:15:20.338591 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:15:20.345844 kubelet[2999]: E1213 14:15:20.345775 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.346306 kubelet[2999]: W1213 14:15:20.346232 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.352179 kubelet[2999]: E1213 14:15:20.352135 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:20.353071 kubelet[2999]: E1213 14:15:20.353038 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.354254 kubelet[2999]: W1213 14:15:20.354210 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.354505 kubelet[2999]: E1213 14:15:20.354467 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:15:20.355343 kubelet[2999]: E1213 14:15:20.355280 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.355709 kubelet[2999]: W1213 14:15:20.355676 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.356217 kubelet[2999]: E1213 14:15:20.355899 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:20.357040 kubelet[2999]: E1213 14:15:20.356980 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.357329 kubelet[2999]: W1213 14:15:20.357299 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.357651 kubelet[2999]: E1213 14:15:20.357592 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:15:20.358940 kubelet[2999]: E1213 14:15:20.358909 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.359173 kubelet[2999]: W1213 14:15:20.359145 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.359378 kubelet[2999]: E1213 14:15:20.359355 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:20.360441 kubelet[2999]: E1213 14:15:20.360408 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.360661 kubelet[2999]: W1213 14:15:20.360601 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.361708 kubelet[2999]: E1213 14:15:20.361673 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:15:20.362607 kubelet[2999]: E1213 14:15:20.362571 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.362885 kubelet[2999]: W1213 14:15:20.362854 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.363117 kubelet[2999]: E1213 14:15:20.363093 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:20.379477 kubelet[2999]: E1213 14:15:20.379092 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.379477 kubelet[2999]: W1213 14:15:20.379130 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.379477 kubelet[2999]: E1213 14:15:20.379183 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:15:20.383873 kubelet[2999]: E1213 14:15:20.383810 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.383873 kubelet[2999]: W1213 14:15:20.383851 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.387003 kubelet[2999]: E1213 14:15:20.384166 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:20.387153 env[1837]: time="2024-12-13T14:15:20.380590994Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:15:20.387153 env[1837]: time="2024-12-13T14:15:20.380761833Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:15:20.387153 env[1837]: time="2024-12-13T14:15:20.380790590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:15:20.387153 env[1837]: time="2024-12-13T14:15:20.381491468Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d9712514c0e843dddbfdbea9e5f89965a0fec7ab70a29f1cd5863038bfecff44 pid=3417 runtime=io.containerd.runc.v2 Dec 13 14:15:20.390700 kubelet[2999]: E1213 14:15:20.390075 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.390700 kubelet[2999]: W1213 14:15:20.390114 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.390700 kubelet[2999]: E1213 14:15:20.390153 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:20.391347 kubelet[2999]: E1213 14:15:20.391021 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.391347 kubelet[2999]: W1213 14:15:20.391053 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.391347 kubelet[2999]: E1213 14:15:20.391086 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:15:20.391963 kubelet[2999]: E1213 14:15:20.391648 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.391963 kubelet[2999]: W1213 14:15:20.391671 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.391963 kubelet[2999]: E1213 14:15:20.391700 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:20.394286 kubelet[2999]: E1213 14:15:20.392222 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.394286 kubelet[2999]: W1213 14:15:20.392243 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.394286 kubelet[2999]: E1213 14:15:20.392269 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:15:20.395297 kubelet[2999]: E1213 14:15:20.394817 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.395297 kubelet[2999]: W1213 14:15:20.394851 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.395297 kubelet[2999]: E1213 14:15:20.394888 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:20.396285 kubelet[2999]: E1213 14:15:20.395843 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.396285 kubelet[2999]: W1213 14:15:20.395880 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.396285 kubelet[2999]: E1213 14:15:20.395918 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:15:20.396957 kubelet[2999]: E1213 14:15:20.396727 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.396957 kubelet[2999]: W1213 14:15:20.396756 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.396957 kubelet[2999]: E1213 14:15:20.396792 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:20.403258 kubelet[2999]: E1213 14:15:20.402994 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.403258 kubelet[2999]: W1213 14:15:20.403042 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.403258 kubelet[2999]: E1213 14:15:20.403082 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:15:20.403870 kubelet[2999]: E1213 14:15:20.403840 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.404032 kubelet[2999]: W1213 14:15:20.404003 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.404161 kubelet[2999]: E1213 14:15:20.404138 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:20.421026 env[1837]: time="2024-12-13T14:15:20.420949804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-db756,Uid:52388ff0-0bba-4189-a650-8a2e231fa4cd,Namespace:calico-system,Attempt:0,}" Dec 13 14:15:20.512051 kubelet[2999]: E1213 14:15:20.505594 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.512679 kubelet[2999]: W1213 14:15:20.512261 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.512679 kubelet[2999]: E1213 14:15:20.512342 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:15:20.512679 kubelet[2999]: I1213 14:15:20.512404 2999 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/83621a84-eb8a-4acb-be6b-37240d10ca28-varrun\") pod \"csi-node-driver-79sgh\" (UID: \"83621a84-eb8a-4acb-be6b-37240d10ca28\") " pod="calico-system/csi-node-driver-79sgh" Dec 13 14:15:20.511000 audit[3459]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=3459 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:15:20.511000 audit[3459]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6652 a0=3 a1=ffffe48b4e90 a2=0 a3=1 items=0 ppid=3178 pid=3459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:20.511000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:15:20.516413 kubelet[2999]: E1213 14:15:20.515812 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.516413 kubelet[2999]: W1213 14:15:20.515846 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.516413 kubelet[2999]: E1213 14:15:20.515896 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:15:20.516413 kubelet[2999]: I1213 14:15:20.515947 2999 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxd54\" (UniqueName: \"kubernetes.io/projected/83621a84-eb8a-4acb-be6b-37240d10ca28-kube-api-access-wxd54\") pod \"csi-node-driver-79sgh\" (UID: \"83621a84-eb8a-4acb-be6b-37240d10ca28\") " pod="calico-system/csi-node-driver-79sgh" Dec 13 14:15:20.517656 kubelet[2999]: E1213 14:15:20.516868 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.517656 kubelet[2999]: W1213 14:15:20.516898 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.517656 kubelet[2999]: E1213 14:15:20.517120 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:15:20.517656 kubelet[2999]: I1213 14:15:20.517226 2999 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/83621a84-eb8a-4acb-be6b-37240d10ca28-socket-dir\") pod \"csi-node-driver-79sgh\" (UID: \"83621a84-eb8a-4acb-be6b-37240d10ca28\") " pod="calico-system/csi-node-driver-79sgh" Dec 13 14:15:20.518535 kubelet[2999]: E1213 14:15:20.518255 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.518535 kubelet[2999]: W1213 14:15:20.518318 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.518535 kubelet[2999]: E1213 14:15:20.518399 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:20.519545 kubelet[2999]: E1213 14:15:20.519483 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.519806 kubelet[2999]: W1213 14:15:20.519748 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.520106 kubelet[2999]: E1213 14:15:20.520083 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:15:20.520855 kubelet[2999]: E1213 14:15:20.520792 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.521173 kubelet[2999]: W1213 14:15:20.521117 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.521496 kubelet[2999]: E1213 14:15:20.521470 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:20.521876 kubelet[2999]: I1213 14:15:20.521849 2999 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/83621a84-eb8a-4acb-be6b-37240d10ca28-registration-dir\") pod \"csi-node-driver-79sgh\" (UID: \"83621a84-eb8a-4acb-be6b-37240d10ca28\") " pod="calico-system/csi-node-driver-79sgh" Dec 13 14:15:20.523908 kubelet[2999]: E1213 14:15:20.523871 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.524109 kubelet[2999]: W1213 14:15:20.524080 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.524490 kubelet[2999]: E1213 14:15:20.524434 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:15:20.524998 kubelet[2999]: E1213 14:15:20.524972 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.525157 kubelet[2999]: W1213 14:15:20.525130 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.525315 kubelet[2999]: E1213 14:15:20.525293 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:20.526035 kubelet[2999]: E1213 14:15:20.526003 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.526215 kubelet[2999]: W1213 14:15:20.526188 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.526376 kubelet[2999]: E1213 14:15:20.526353 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:15:20.526997 kubelet[2999]: E1213 14:15:20.526970 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.527197 kubelet[2999]: W1213 14:15:20.527169 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.527349 kubelet[2999]: E1213 14:15:20.527324 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:20.528860 kubelet[2999]: E1213 14:15:20.528792 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.529154 kubelet[2999]: W1213 14:15:20.529125 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.529947 kubelet[2999]: E1213 14:15:20.529527 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:15:20.530456 kubelet[2999]: I1213 14:15:20.530424 2999 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/83621a84-eb8a-4acb-be6b-37240d10ca28-kubelet-dir\") pod \"csi-node-driver-79sgh\" (UID: \"83621a84-eb8a-4acb-be6b-37240d10ca28\") " pod="calico-system/csi-node-driver-79sgh" Dec 13 14:15:20.529000 audit[3459]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=3459 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:15:20.529000 audit[3459]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe48b4e90 a2=0 a3=1 items=0 ppid=3178 pid=3459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:20.529000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:15:20.540497 kubelet[2999]: E1213 14:15:20.540455 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.540771 env[1837]: time="2024-12-13T14:15:20.536414805Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:15:20.540771 env[1837]: time="2024-12-13T14:15:20.536489074Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:15:20.540771 env[1837]: time="2024-12-13T14:15:20.536515563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:15:20.540771 env[1837]: time="2024-12-13T14:15:20.537435060Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e2d2fded34efdcced69b9497477b0e3187f68dcc15f5c97a8ec617da07e7df07 pid=3466 runtime=io.containerd.runc.v2 Dec 13 14:15:20.541097 kubelet[2999]: W1213 14:15:20.540737 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.541276 kubelet[2999]: E1213 14:15:20.541240 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:20.542019 kubelet[2999]: E1213 14:15:20.541987 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.542360 kubelet[2999]: W1213 14:15:20.542324 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.542532 kubelet[2999]: E1213 14:15:20.542508 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:15:20.546537 kubelet[2999]: E1213 14:15:20.546500 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.546785 kubelet[2999]: W1213 14:15:20.546753 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.546940 kubelet[2999]: E1213 14:15:20.546915 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:20.547491 kubelet[2999]: E1213 14:15:20.547461 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.547781 kubelet[2999]: W1213 14:15:20.547749 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.547940 kubelet[2999]: E1213 14:15:20.547916 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:15:20.613031 env[1837]: time="2024-12-13T14:15:20.612972825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-b7cdcbf85-gkn6h,Uid:5b4faaed-1369-4031-84ad-ab879c2d70c7,Namespace:calico-system,Attempt:0,} returns sandbox id \"d9712514c0e843dddbfdbea9e5f89965a0fec7ab70a29f1cd5863038bfecff44\"" Dec 13 14:15:20.616732 env[1837]: time="2024-12-13T14:15:20.616599048Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Dec 13 14:15:20.646883 kubelet[2999]: E1213 14:15:20.646218 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.646883 kubelet[2999]: W1213 14:15:20.646276 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.646883 kubelet[2999]: E1213 14:15:20.646315 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:20.647785 kubelet[2999]: E1213 14:15:20.647323 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.647785 kubelet[2999]: W1213 14:15:20.647350 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.647785 kubelet[2999]: E1213 14:15:20.647416 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:15:20.651967 kubelet[2999]: E1213 14:15:20.651352 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.651967 kubelet[2999]: W1213 14:15:20.651385 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.651967 kubelet[2999]: E1213 14:15:20.651431 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:20.652691 kubelet[2999]: E1213 14:15:20.652376 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.652691 kubelet[2999]: W1213 14:15:20.652406 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.654524 kubelet[2999]: E1213 14:15:20.653526 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:15:20.657847 kubelet[2999]: E1213 14:15:20.657312 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.657847 kubelet[2999]: W1213 14:15:20.657344 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.657847 kubelet[2999]: E1213 14:15:20.657527 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:20.658746 kubelet[2999]: E1213 14:15:20.658238 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.658746 kubelet[2999]: W1213 14:15:20.658264 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.658746 kubelet[2999]: E1213 14:15:20.658475 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:15:20.659555 kubelet[2999]: E1213 14:15:20.659207 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.659555 kubelet[2999]: W1213 14:15:20.659235 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.659555 kubelet[2999]: E1213 14:15:20.659485 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:20.661747 kubelet[2999]: E1213 14:15:20.661709 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.662273 kubelet[2999]: W1213 14:15:20.662101 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.662684 kubelet[2999]: E1213 14:15:20.662575 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:15:20.664365 kubelet[2999]: E1213 14:15:20.664328 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.664754 kubelet[2999]: W1213 14:15:20.664691 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.665161 kubelet[2999]: E1213 14:15:20.665132 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:20.666297 kubelet[2999]: E1213 14:15:20.666208 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.667542 kubelet[2999]: W1213 14:15:20.667470 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.668014 kubelet[2999]: E1213 14:15:20.667979 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:15:20.670020 kubelet[2999]: E1213 14:15:20.669982 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.670288 kubelet[2999]: W1213 14:15:20.670253 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.670568 kubelet[2999]: E1213 14:15:20.670539 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:20.675937 kubelet[2999]: E1213 14:15:20.674746 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.675937 kubelet[2999]: W1213 14:15:20.674792 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.675937 kubelet[2999]: E1213 14:15:20.675197 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.675937 kubelet[2999]: W1213 14:15:20.675218 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.675937 kubelet[2999]: E1213 14:15:20.675505 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.675937 kubelet[2999]: W1213 14:15:20.675523 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: 
[init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.675937 kubelet[2999]: E1213 14:15:20.675837 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.675937 kubelet[2999]: W1213 14:15:20.675855 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.675937 kubelet[2999]: E1213 14:15:20.675911 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:20.683138 kubelet[2999]: E1213 14:15:20.677026 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.683138 kubelet[2999]: W1213 14:15:20.677051 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.683138 kubelet[2999]: E1213 14:15:20.677085 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:20.683138 kubelet[2999]: E1213 14:15:20.677418 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:15:20.683138 kubelet[2999]: E1213 14:15:20.677895 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.683138 kubelet[2999]: W1213 14:15:20.677918 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.683138 kubelet[2999]: E1213 14:15:20.677947 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:20.683138 kubelet[2999]: E1213 14:15:20.678233 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.683138 kubelet[2999]: W1213 14:15:20.678250 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.683138 kubelet[2999]: E1213 14:15:20.678275 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:15:20.683686 kubelet[2999]: E1213 14:15:20.678544 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.683686 kubelet[2999]: W1213 14:15:20.678576 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.683686 kubelet[2999]: E1213 14:15:20.678606 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:20.683686 kubelet[2999]: E1213 14:15:20.679055 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.683686 kubelet[2999]: W1213 14:15:20.679079 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.683686 kubelet[2999]: E1213 14:15:20.679108 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:20.683686 kubelet[2999]: E1213 14:15:20.679156 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:15:20.683686 kubelet[2999]: E1213 14:15:20.679505 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.683686 kubelet[2999]: W1213 14:15:20.679528 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.683686 kubelet[2999]: E1213 14:15:20.679556 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:20.684208 kubelet[2999]: E1213 14:15:20.680018 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.684208 kubelet[2999]: W1213 14:15:20.680049 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.684208 kubelet[2999]: E1213 14:15:20.680085 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:15:20.684208 kubelet[2999]: E1213 14:15:20.680461 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.684208 kubelet[2999]: W1213 14:15:20.680479 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.684208 kubelet[2999]: E1213 14:15:20.680505 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:20.684208 kubelet[2999]: E1213 14:15:20.680549 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:20.684208 kubelet[2999]: E1213 14:15:20.680869 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.684208 kubelet[2999]: W1213 14:15:20.680889 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.684208 kubelet[2999]: E1213 14:15:20.680922 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:15:20.684746 kubelet[2999]: E1213 14:15:20.681497 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.684746 kubelet[2999]: W1213 14:15:20.681524 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.684746 kubelet[2999]: E1213 14:15:20.681553 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:20.713951 kubelet[2999]: E1213 14:15:20.713914 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:20.714167 kubelet[2999]: W1213 14:15:20.714136 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:20.714330 kubelet[2999]: E1213 14:15:20.714307 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:15:20.733176 env[1837]: time="2024-12-13T14:15:20.733080824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-db756,Uid:52388ff0-0bba-4189-a650-8a2e231fa4cd,Namespace:calico-system,Attempt:0,} returns sandbox id \"e2d2fded34efdcced69b9497477b0e3187f68dcc15f5c97a8ec617da07e7df07\"" Dec 13 14:15:21.894152 kubelet[2999]: E1213 14:15:21.893863 2999 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-79sgh" podUID="83621a84-eb8a-4acb-be6b-37240d10ca28" Dec 13 14:15:22.098737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount332640667.mount: Deactivated successfully. Dec 13 14:15:23.158963 env[1837]: time="2024-12-13T14:15:23.157817221Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:23.163921 env[1837]: time="2024-12-13T14:15:23.162390544Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:23.165684 env[1837]: time="2024-12-13T14:15:23.165569952Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:23.168717 env[1837]: time="2024-12-13T14:15:23.168650560Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:23.169232 env[1837]: 
time="2024-12-13T14:15:23.169184962Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Dec 13 14:15:23.172578 env[1837]: time="2024-12-13T14:15:23.171333985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 14:15:23.204872 env[1837]: time="2024-12-13T14:15:23.204790275Z" level=info msg="CreateContainer within sandbox \"d9712514c0e843dddbfdbea9e5f89965a0fec7ab70a29f1cd5863038bfecff44\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 13 14:15:23.233288 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2085602371.mount: Deactivated successfully. Dec 13 14:15:23.237583 env[1837]: time="2024-12-13T14:15:23.237505549Z" level=info msg="CreateContainer within sandbox \"d9712514c0e843dddbfdbea9e5f89965a0fec7ab70a29f1cd5863038bfecff44\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ceb55001bfe2d12ec427b7de20ca75944d604f98be5dd75c90437f8004e20772\"" Dec 13 14:15:23.238982 env[1837]: time="2024-12-13T14:15:23.238914982Z" level=info msg="StartContainer for \"ceb55001bfe2d12ec427b7de20ca75944d604f98be5dd75c90437f8004e20772\"" Dec 13 14:15:23.403096 env[1837]: time="2024-12-13T14:15:23.403026856Z" level=info msg="StartContainer for \"ceb55001bfe2d12ec427b7de20ca75944d604f98be5dd75c90437f8004e20772\" returns successfully" Dec 13 14:15:23.894186 kubelet[2999]: E1213 14:15:23.894137 2999 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-79sgh" podUID="83621a84-eb8a-4acb-be6b-37240d10ca28" Dec 13 14:15:24.033527 kubelet[2999]: E1213 14:15:24.033480 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON 
input Dec 13 14:15:24.033928 kubelet[2999]: W1213 14:15:24.033893 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:24.034140 kubelet[2999]: E1213 14:15:24.034115 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:24.036170 kubelet[2999]: E1213 14:15:24.035259 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:24.036420 kubelet[2999]: W1213 14:15:24.036387 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:24.036612 kubelet[2999]: E1213 14:15:24.036588 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:24.045000 kubelet[2999]: E1213 14:15:24.041798 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:24.045000 kubelet[2999]: W1213 14:15:24.041849 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:24.045000 kubelet[2999]: E1213 14:15:24.041888 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:15:24.045000 kubelet[2999]: E1213 14:15:24.042357 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:24.045000 kubelet[2999]: W1213 14:15:24.042377 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:24.045000 kubelet[2999]: E1213 14:15:24.042404 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:24.045000 kubelet[2999]: E1213 14:15:24.042795 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:24.045000 kubelet[2999]: W1213 14:15:24.042826 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:24.045000 kubelet[2999]: E1213 14:15:24.042854 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:15:24.045000 kubelet[2999]: E1213 14:15:24.043184 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:24.045831 kubelet[2999]: W1213 14:15:24.043201 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:24.045831 kubelet[2999]: E1213 14:15:24.043227 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:24.045831 kubelet[2999]: E1213 14:15:24.043578 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:24.045831 kubelet[2999]: W1213 14:15:24.043609 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:24.045831 kubelet[2999]: E1213 14:15:24.043683 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:15:24.045831 kubelet[2999]: E1213 14:15:24.044048 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:24.045831 kubelet[2999]: W1213 14:15:24.044067 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:24.045831 kubelet[2999]: E1213 14:15:24.044095 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:24.045831 kubelet[2999]: E1213 14:15:24.044460 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:24.045831 kubelet[2999]: W1213 14:15:24.044529 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:24.046460 kubelet[2999]: E1213 14:15:24.044558 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:15:24.046460 kubelet[2999]: E1213 14:15:24.045021 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:24.046460 kubelet[2999]: W1213 14:15:24.045044 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:24.046460 kubelet[2999]: E1213 14:15:24.045072 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:24.046460 kubelet[2999]: E1213 14:15:24.045597 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:24.046460 kubelet[2999]: W1213 14:15:24.045631 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:24.046460 kubelet[2999]: E1213 14:15:24.045694 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:15:24.046460 kubelet[2999]: E1213 14:15:24.046084 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:24.046460 kubelet[2999]: W1213 14:15:24.046102 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:24.046460 kubelet[2999]: E1213 14:15:24.046127 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:24.047047 kubelet[2999]: E1213 14:15:24.046453 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:24.047047 kubelet[2999]: W1213 14:15:24.046469 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:24.047047 kubelet[2999]: E1213 14:15:24.046493 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:15:24.047047 kubelet[2999]: E1213 14:15:24.047044 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:24.047273 kubelet[2999]: W1213 14:15:24.047065 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:24.047273 kubelet[2999]: E1213 14:15:24.047091 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:24.047452 kubelet[2999]: E1213 14:15:24.047422 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:24.047452 kubelet[2999]: W1213 14:15:24.047448 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:24.047582 kubelet[2999]: E1213 14:15:24.047474 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:15:24.055576 kubelet[2999]: I1213 14:15:24.055510 2999 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-b7cdcbf85-gkn6h" podStartSLOduration=2.501390822 podStartE2EDuration="5.055453834s" podCreationTimestamp="2024-12-13 14:15:19 +0000 UTC" firstStartedPulling="2024-12-13 14:15:20.615721283 +0000 UTC m=+23.120327039" lastFinishedPulling="2024-12-13 14:15:23.169784283 +0000 UTC m=+25.674390051" observedRunningTime="2024-12-13 14:15:24.055166458 +0000 UTC m=+26.559772250" watchObservedRunningTime="2024-12-13 14:15:24.055453834 +0000 UTC m=+26.560059602" Dec 13 14:15:24.108759 kubelet[2999]: E1213 14:15:24.108711 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:24.108759 kubelet[2999]: W1213 14:15:24.108749 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:24.109045 kubelet[2999]: E1213 14:15:24.108785 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:24.109252 kubelet[2999]: E1213 14:15:24.109214 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:24.109362 kubelet[2999]: W1213 14:15:24.109245 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:24.109362 kubelet[2999]: E1213 14:15:24.109308 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:15:24.109700 kubelet[2999]: E1213 14:15:24.109673 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:24.109803 kubelet[2999]: W1213 14:15:24.109699 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:24.109803 kubelet[2999]: E1213 14:15:24.109742 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:24.110135 kubelet[2999]: E1213 14:15:24.110109 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:24.110274 kubelet[2999]: W1213 14:15:24.110134 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:24.110274 kubelet[2999]: E1213 14:15:24.110173 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:15:24.110711 kubelet[2999]: E1213 14:15:24.110459 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:24.110711 kubelet[2999]: W1213 14:15:24.110485 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:24.110711 kubelet[2999]: E1213 14:15:24.110516 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:24.110910 kubelet[2999]: E1213 14:15:24.110841 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:24.110910 kubelet[2999]: W1213 14:15:24.110858 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:24.110910 kubelet[2999]: E1213 14:15:24.110884 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:15:24.111248 kubelet[2999]: E1213 14:15:24.111178 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:24.111248 kubelet[2999]: W1213 14:15:24.111246 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:24.111397 kubelet[2999]: E1213 14:15:24.111275 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:24.112004 kubelet[2999]: E1213 14:15:24.111977 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:24.112167 kubelet[2999]: W1213 14:15:24.112143 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:24.112792 kubelet[2999]: E1213 14:15:24.112758 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:15:24.113369 kubelet[2999]: E1213 14:15:24.113313 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:24.113584 kubelet[2999]: W1213 14:15:24.113558 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:24.113778 kubelet[2999]: E1213 14:15:24.113755 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:24.114361 kubelet[2999]: E1213 14:15:24.114152 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:24.114361 kubelet[2999]: W1213 14:15:24.114182 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:24.114361 kubelet[2999]: E1213 14:15:24.114226 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:15:24.114658 kubelet[2999]: E1213 14:15:24.114595 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:24.114658 kubelet[2999]: W1213 14:15:24.114638 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:24.114781 kubelet[2999]: E1213 14:15:24.114696 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:24.115002 kubelet[2999]: E1213 14:15:24.114975 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:24.115100 kubelet[2999]: W1213 14:15:24.115001 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:24.115216 kubelet[2999]: E1213 14:15:24.115192 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:15:24.115376 kubelet[2999]: E1213 14:15:24.115284 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:24.115488 kubelet[2999]: W1213 14:15:24.115465 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:24.115652 kubelet[2999]: E1213 14:15:24.115606 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:24.116020 kubelet[2999]: E1213 14:15:24.115993 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:24.116113 kubelet[2999]: W1213 14:15:24.116019 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:24.116113 kubelet[2999]: E1213 14:15:24.116058 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:15:24.117042 kubelet[2999]: E1213 14:15:24.116999 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:24.117281 kubelet[2999]: W1213 14:15:24.117253 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:24.117416 kubelet[2999]: E1213 14:15:24.117393 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:24.118000 kubelet[2999]: E1213 14:15:24.117972 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:24.118155 kubelet[2999]: W1213 14:15:24.118129 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:24.118281 kubelet[2999]: E1213 14:15:24.118259 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:15:24.121163 kubelet[2999]: E1213 14:15:24.121106 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:24.121163 kubelet[2999]: W1213 14:15:24.121151 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:24.121599 kubelet[2999]: E1213 14:15:24.121563 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:15:24.121599 kubelet[2999]: W1213 14:15:24.121595 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:15:24.121870 kubelet[2999]: E1213 14:15:24.121660 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:15:24.121870 kubelet[2999]: E1213 14:15:24.121728 2999 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:15:24.733818 env[1837]: time="2024-12-13T14:15:24.733708917Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:24.737718 env[1837]: time="2024-12-13T14:15:24.736350050Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:24.740705 env[1837]: time="2024-12-13T14:15:24.739610665Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:24.742603 env[1837]: time="2024-12-13T14:15:24.742536798Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:24.743728 env[1837]: time="2024-12-13T14:15:24.743666184Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Dec 13 14:15:24.750340 env[1837]: time="2024-12-13T14:15:24.750236143Z" level=info msg="CreateContainer within sandbox \"e2d2fded34efdcced69b9497477b0e3187f68dcc15f5c97a8ec617da07e7df07\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 14:15:24.775790 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3921258555.mount: Deactivated successfully. 
Dec 13 14:15:24.781605 env[1837]: time="2024-12-13T14:15:24.781519619Z" level=info msg="CreateContainer within sandbox \"e2d2fded34efdcced69b9497477b0e3187f68dcc15f5c97a8ec617da07e7df07\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c47468e158f79b46ec3ce1fb17a6785bcafb228af8cbd29b13a60cef00091d46\"" Dec 13 14:15:24.784392 env[1837]: time="2024-12-13T14:15:24.784141393Z" level=info msg="StartContainer for \"c47468e158f79b46ec3ce1fb17a6785bcafb228af8cbd29b13a60cef00091d46\"" Dec 13 14:15:24.920027 env[1837]: time="2024-12-13T14:15:24.919902825Z" level=info msg="StartContainer for \"c47468e158f79b46ec3ce1fb17a6785bcafb228af8cbd29b13a60cef00091d46\" returns successfully" Dec 13 14:15:25.036507 kubelet[2999]: I1213 14:15:25.036361 2999 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 14:15:25.185815 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c47468e158f79b46ec3ce1fb17a6785bcafb228af8cbd29b13a60cef00091d46-rootfs.mount: Deactivated successfully. 
Dec 13 14:15:25.367955 env[1837]: time="2024-12-13T14:15:25.367878752Z" level=info msg="shim disconnected" id=c47468e158f79b46ec3ce1fb17a6785bcafb228af8cbd29b13a60cef00091d46 Dec 13 14:15:25.368250 env[1837]: time="2024-12-13T14:15:25.367954988Z" level=warning msg="cleaning up after shim disconnected" id=c47468e158f79b46ec3ce1fb17a6785bcafb228af8cbd29b13a60cef00091d46 namespace=k8s.io Dec 13 14:15:25.368250 env[1837]: time="2024-12-13T14:15:25.367981861Z" level=info msg="cleaning up dead shim" Dec 13 14:15:25.382114 env[1837]: time="2024-12-13T14:15:25.382040872Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:15:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3678 runtime=io.containerd.runc.v2\n" Dec 13 14:15:25.894406 kubelet[2999]: E1213 14:15:25.894353 2999 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-79sgh" podUID="83621a84-eb8a-4acb-be6b-37240d10ca28" Dec 13 14:15:26.045406 env[1837]: time="2024-12-13T14:15:26.045305695Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 14:15:27.895929 kubelet[2999]: E1213 14:15:27.895885 2999 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-79sgh" podUID="83621a84-eb8a-4acb-be6b-37240d10ca28" Dec 13 14:15:28.966588 kubelet[2999]: I1213 14:15:28.966506 2999 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 14:15:29.075212 kernel: kauditd_printk_skb: 8 callbacks suppressed Dec 13 14:15:29.075370 kernel: audit: type=1325 audit(1734099329.067:297): table=filter:95 family=2 entries=17 op=nft_register_rule pid=3693 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:15:29.067000 audit[3693]: NETFILTER_CFG table=filter:95 family=2 entries=17 op=nft_register_rule pid=3693 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:15:29.086632 kernel: audit: type=1300 audit(1734099329.067:297): arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=fffff902f3d0 a2=0 a3=1 items=0 ppid=3178 pid=3693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:29.067000 audit[3693]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=fffff902f3d0 a2=0 a3=1 items=0 ppid=3178 pid=3693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:29.067000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:15:29.092538 kernel: audit: type=1327 audit(1734099329.067:297): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:15:29.093000 audit[3693]: NETFILTER_CFG table=nat:96 family=2 entries=19 op=nft_register_chain pid=3693 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:15:29.110944 kernel: audit: type=1325 audit(1734099329.093:298): table=nat:96 family=2 entries=19 op=nft_register_chain pid=3693 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:15:29.111091 kernel: audit: type=1300 audit(1734099329.093:298): arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=fffff902f3d0 a2=0 a3=1 items=0 ppid=3178 pid=3693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:29.093000 audit[3693]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=fffff902f3d0 a2=0 a3=1 items=0 ppid=3178 pid=3693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:29.093000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:15:29.116561 kernel: audit: type=1327 audit(1734099329.093:298): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:15:29.894700 kubelet[2999]: E1213 14:15:29.894558 2999 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-79sgh" podUID="83621a84-eb8a-4acb-be6b-37240d10ca28" Dec 13 14:15:30.964561 env[1837]: time="2024-12-13T14:15:30.964503718Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:30.969763 env[1837]: time="2024-12-13T14:15:30.969709727Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:30.972885 env[1837]: time="2024-12-13T14:15:30.972836172Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:30.976444 env[1837]: time="2024-12-13T14:15:30.976372154Z" 
level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:30.979524 env[1837]: time="2024-12-13T14:15:30.979453028Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Dec 13 14:15:30.986062 env[1837]: time="2024-12-13T14:15:30.986007595Z" level=info msg="CreateContainer within sandbox \"e2d2fded34efdcced69b9497477b0e3187f68dcc15f5c97a8ec617da07e7df07\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 14:15:31.019436 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1519627515.mount: Deactivated successfully. Dec 13 14:15:31.029106 env[1837]: time="2024-12-13T14:15:31.029040557Z" level=info msg="CreateContainer within sandbox \"e2d2fded34efdcced69b9497477b0e3187f68dcc15f5c97a8ec617da07e7df07\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"29432eb3c4d5e512146b06f6c66e23c850a24421c2f05a90845d057ba82597f1\"" Dec 13 14:15:31.031358 env[1837]: time="2024-12-13T14:15:31.031264774Z" level=info msg="StartContainer for \"29432eb3c4d5e512146b06f6c66e23c850a24421c2f05a90845d057ba82597f1\"" Dec 13 14:15:31.176376 env[1837]: time="2024-12-13T14:15:31.174689543Z" level=info msg="StartContainer for \"29432eb3c4d5e512146b06f6c66e23c850a24421c2f05a90845d057ba82597f1\" returns successfully" Dec 13 14:15:31.896615 kubelet[2999]: E1213 14:15:31.896573 2999 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-79sgh" podUID="83621a84-eb8a-4acb-be6b-37240d10ca28" Dec 13 14:15:32.456695 env[1837]: 
time="2024-12-13T14:15:32.456598783Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:15:32.486672 kubelet[2999]: I1213 14:15:32.486592 2999 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 14:15:32.525295 kubelet[2999]: I1213 14:15:32.525233 2999 topology_manager.go:215] "Topology Admit Handler" podUID="d999231e-24a7-47cf-8eea-96857833ff01" podNamespace="kube-system" podName="coredns-76f75df574-rbxmj" Dec 13 14:15:32.543362 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-29432eb3c4d5e512146b06f6c66e23c850a24421c2f05a90845d057ba82597f1-rootfs.mount: Deactivated successfully. Dec 13 14:15:32.546873 kubelet[2999]: I1213 14:15:32.544922 2999 topology_manager.go:215] "Topology Admit Handler" podUID="29176771-bd16-429b-96c6-cf2e38be6836" podNamespace="calico-apiserver" podName="calico-apiserver-bffc5dd46-5tznx" Dec 13 14:15:32.546873 kubelet[2999]: I1213 14:15:32.545188 2999 topology_manager.go:215] "Topology Admit Handler" podUID="ef634f66-b7a3-4b1f-99b3-8db2e225f26a" podNamespace="calico-apiserver" podName="calico-apiserver-bffc5dd46-2qrs2" Dec 13 14:15:32.546873 kubelet[2999]: I1213 14:15:32.545379 2999 topology_manager.go:215] "Topology Admit Handler" podUID="d93f59e9-cea4-4e42-99d4-3d89f412196e" podNamespace="kube-system" podName="coredns-76f75df574-nxsxq" Dec 13 14:15:32.555599 kubelet[2999]: I1213 14:15:32.550996 2999 topology_manager.go:215] "Topology Admit Handler" podUID="649c8cd1-1016-49a5-85ac-f55023619db6" podNamespace="calico-system" podName="calico-kube-controllers-54654bf745-9fq4l" Dec 13 14:15:32.577158 kubelet[2999]: I1213 14:15:32.573694 2999 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/d999231e-24a7-47cf-8eea-96857833ff01-config-volume\") pod \"coredns-76f75df574-rbxmj\" (UID: \"d999231e-24a7-47cf-8eea-96857833ff01\") " pod="kube-system/coredns-76f75df574-rbxmj" Dec 13 14:15:32.577158 kubelet[2999]: I1213 14:15:32.573800 2999 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn5dk\" (UniqueName: \"kubernetes.io/projected/d999231e-24a7-47cf-8eea-96857833ff01-kube-api-access-zn5dk\") pod \"coredns-76f75df574-rbxmj\" (UID: \"d999231e-24a7-47cf-8eea-96857833ff01\") " pod="kube-system/coredns-76f75df574-rbxmj" Dec 13 14:15:32.674489 kubelet[2999]: I1213 14:15:32.674431 2999 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/29176771-bd16-429b-96c6-cf2e38be6836-calico-apiserver-certs\") pod \"calico-apiserver-bffc5dd46-5tznx\" (UID: \"29176771-bd16-429b-96c6-cf2e38be6836\") " pod="calico-apiserver/calico-apiserver-bffc5dd46-5tznx" Dec 13 14:15:32.674703 kubelet[2999]: I1213 14:15:32.674515 2999 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/649c8cd1-1016-49a5-85ac-f55023619db6-tigera-ca-bundle\") pod \"calico-kube-controllers-54654bf745-9fq4l\" (UID: \"649c8cd1-1016-49a5-85ac-f55023619db6\") " pod="calico-system/calico-kube-controllers-54654bf745-9fq4l" Dec 13 14:15:32.674703 kubelet[2999]: I1213 14:15:32.674573 2999 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d93f59e9-cea4-4e42-99d4-3d89f412196e-config-volume\") pod \"coredns-76f75df574-nxsxq\" (UID: \"d93f59e9-cea4-4e42-99d4-3d89f412196e\") " pod="kube-system/coredns-76f75df574-nxsxq" Dec 13 14:15:32.674703 kubelet[2999]: I1213 14:15:32.674671 2999 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4g92\" (UniqueName: \"kubernetes.io/projected/649c8cd1-1016-49a5-85ac-f55023619db6-kube-api-access-c4g92\") pod \"calico-kube-controllers-54654bf745-9fq4l\" (UID: \"649c8cd1-1016-49a5-85ac-f55023619db6\") " pod="calico-system/calico-kube-controllers-54654bf745-9fq4l" Dec 13 14:15:32.674934 kubelet[2999]: I1213 14:15:32.674847 2999 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npb9h\" (UniqueName: \"kubernetes.io/projected/ef634f66-b7a3-4b1f-99b3-8db2e225f26a-kube-api-access-npb9h\") pod \"calico-apiserver-bffc5dd46-2qrs2\" (UID: \"ef634f66-b7a3-4b1f-99b3-8db2e225f26a\") " pod="calico-apiserver/calico-apiserver-bffc5dd46-2qrs2" Dec 13 14:15:32.675026 kubelet[2999]: I1213 14:15:32.674949 2999 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ef634f66-b7a3-4b1f-99b3-8db2e225f26a-calico-apiserver-certs\") pod \"calico-apiserver-bffc5dd46-2qrs2\" (UID: \"ef634f66-b7a3-4b1f-99b3-8db2e225f26a\") " pod="calico-apiserver/calico-apiserver-bffc5dd46-2qrs2" Dec 13 14:15:32.675117 kubelet[2999]: I1213 14:15:32.675030 2999 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gzc6\" (UniqueName: \"kubernetes.io/projected/29176771-bd16-429b-96c6-cf2e38be6836-kube-api-access-9gzc6\") pod \"calico-apiserver-bffc5dd46-5tznx\" (UID: \"29176771-bd16-429b-96c6-cf2e38be6836\") " pod="calico-apiserver/calico-apiserver-bffc5dd46-5tznx" Dec 13 14:15:32.675198 kubelet[2999]: I1213 14:15:32.675122 2999 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6ch4\" (UniqueName: \"kubernetes.io/projected/d93f59e9-cea4-4e42-99d4-3d89f412196e-kube-api-access-v6ch4\") pod \"coredns-76f75df574-nxsxq\" (UID: 
\"d93f59e9-cea4-4e42-99d4-3d89f412196e\") " pod="kube-system/coredns-76f75df574-nxsxq" Dec 13 14:15:32.910262 env[1837]: time="2024-12-13T14:15:32.910192821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-rbxmj,Uid:d999231e-24a7-47cf-8eea-96857833ff01,Namespace:kube-system,Attempt:0,}" Dec 13 14:15:32.910752 env[1837]: time="2024-12-13T14:15:32.910702824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bffc5dd46-5tznx,Uid:29176771-bd16-429b-96c6-cf2e38be6836,Namespace:calico-apiserver,Attempt:0,}" Dec 13 14:15:32.911651 env[1837]: time="2024-12-13T14:15:32.911579768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-nxsxq,Uid:d93f59e9-cea4-4e42-99d4-3d89f412196e,Namespace:kube-system,Attempt:0,}" Dec 13 14:15:32.913154 env[1837]: time="2024-12-13T14:15:32.912775827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bffc5dd46-2qrs2,Uid:ef634f66-b7a3-4b1f-99b3-8db2e225f26a,Namespace:calico-apiserver,Attempt:0,}" Dec 13 14:15:32.917726 env[1837]: time="2024-12-13T14:15:32.917349658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54654bf745-9fq4l,Uid:649c8cd1-1016-49a5-85ac-f55023619db6,Namespace:calico-system,Attempt:0,}" Dec 13 14:15:33.124457 env[1837]: time="2024-12-13T14:15:33.124353444Z" level=info msg="shim disconnected" id=29432eb3c4d5e512146b06f6c66e23c850a24421c2f05a90845d057ba82597f1 Dec 13 14:15:33.124457 env[1837]: time="2024-12-13T14:15:33.124450142Z" level=warning msg="cleaning up after shim disconnected" id=29432eb3c4d5e512146b06f6c66e23c850a24421c2f05a90845d057ba82597f1 namespace=k8s.io Dec 13 14:15:33.124823 env[1837]: time="2024-12-13T14:15:33.124475646Z" level=info msg="cleaning up dead shim" Dec 13 14:15:33.143062 env[1837]: time="2024-12-13T14:15:33.142988364Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:15:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3750 
runtime=io.containerd.runc.v2\n" Dec 13 14:15:33.472462 env[1837]: time="2024-12-13T14:15:33.472357408Z" level=error msg="Failed to destroy network for sandbox \"b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:15:33.473253 env[1837]: time="2024-12-13T14:15:33.473045599Z" level=error msg="encountered an error cleaning up failed sandbox \"b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:15:33.473253 env[1837]: time="2024-12-13T14:15:33.473125771Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bffc5dd46-2qrs2,Uid:ef634f66-b7a3-4b1f-99b3-8db2e225f26a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:15:33.473804 kubelet[2999]: E1213 14:15:33.473726 2999 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:15:33.476314 kubelet[2999]: E1213 14:15:33.473826 2999 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bffc5dd46-2qrs2" Dec 13 14:15:33.476314 kubelet[2999]: E1213 14:15:33.473868 2999 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bffc5dd46-2qrs2" Dec 13 14:15:33.476314 kubelet[2999]: E1213 14:15:33.473967 2999 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-bffc5dd46-2qrs2_calico-apiserver(ef634f66-b7a3-4b1f-99b3-8db2e225f26a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-bffc5dd46-2qrs2_calico-apiserver(ef634f66-b7a3-4b1f-99b3-8db2e225f26a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bffc5dd46-2qrs2" podUID="ef634f66-b7a3-4b1f-99b3-8db2e225f26a" Dec 13 14:15:33.477151 env[1837]: time="2024-12-13T14:15:33.476954083Z" level=error msg="Failed to destroy network for sandbox \"9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:15:33.479088 env[1837]: time="2024-12-13T14:15:33.479012080Z" level=error msg="encountered an error cleaning up failed sandbox \"9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:15:33.479394 env[1837]: time="2024-12-13T14:15:33.479327269Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-nxsxq,Uid:d93f59e9-cea4-4e42-99d4-3d89f412196e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:15:33.480750 kubelet[2999]: E1213 14:15:33.480578 2999 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:15:33.482533 kubelet[2999]: E1213 14:15:33.480956 2999 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-76f75df574-nxsxq" Dec 13 14:15:33.482533 kubelet[2999]: E1213 14:15:33.481021 2999 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-nxsxq" Dec 13 14:15:33.482533 kubelet[2999]: E1213 14:15:33.481108 2999 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-nxsxq_kube-system(d93f59e9-cea4-4e42-99d4-3d89f412196e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-nxsxq_kube-system(d93f59e9-cea4-4e42-99d4-3d89f412196e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-nxsxq" podUID="d93f59e9-cea4-4e42-99d4-3d89f412196e" Dec 13 14:15:33.497664 env[1837]: time="2024-12-13T14:15:33.497547349Z" level=error msg="Failed to destroy network for sandbox \"5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:15:33.498254 env[1837]: time="2024-12-13T14:15:33.498196822Z" level=error msg="encountered an error cleaning up failed sandbox \"5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:15:33.498422 env[1837]: time="2024-12-13T14:15:33.498277282Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bffc5dd46-5tznx,Uid:29176771-bd16-429b-96c6-cf2e38be6836,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:15:33.500051 kubelet[2999]: E1213 14:15:33.498749 2999 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:15:33.500051 kubelet[2999]: E1213 14:15:33.498823 2999 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bffc5dd46-5tznx" Dec 13 14:15:33.500051 kubelet[2999]: E1213 14:15:33.498861 2999 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bffc5dd46-5tznx" Dec 13 14:15:33.500374 kubelet[2999]: E1213 14:15:33.498937 2999 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-bffc5dd46-5tznx_calico-apiserver(29176771-bd16-429b-96c6-cf2e38be6836)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-bffc5dd46-5tznx_calico-apiserver(29176771-bd16-429b-96c6-cf2e38be6836)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bffc5dd46-5tznx" podUID="29176771-bd16-429b-96c6-cf2e38be6836" Dec 13 14:15:33.506472 env[1837]: time="2024-12-13T14:15:33.506385411Z" level=error msg="Failed to destroy network for sandbox \"02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:15:33.507091 env[1837]: time="2024-12-13T14:15:33.507030912Z" level=error msg="encountered an error cleaning up failed sandbox \"02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:15:33.507244 env[1837]: time="2024-12-13T14:15:33.507115669Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-54654bf745-9fq4l,Uid:649c8cd1-1016-49a5-85ac-f55023619db6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:15:33.507768 kubelet[2999]: E1213 14:15:33.507709 2999 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:15:33.507899 kubelet[2999]: E1213 14:15:33.507832 2999 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54654bf745-9fq4l" Dec 13 14:15:33.507983 kubelet[2999]: E1213 14:15:33.507898 2999 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54654bf745-9fq4l" Dec 13 14:15:33.508061 kubelet[2999]: E1213 14:15:33.508020 2999 
pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-54654bf745-9fq4l_calico-system(649c8cd1-1016-49a5-85ac-f55023619db6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-54654bf745-9fq4l_calico-system(649c8cd1-1016-49a5-85ac-f55023619db6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-54654bf745-9fq4l" podUID="649c8cd1-1016-49a5-85ac-f55023619db6" Dec 13 14:15:33.515588 env[1837]: time="2024-12-13T14:15:33.515517396Z" level=error msg="Failed to destroy network for sandbox \"92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:15:33.516579 env[1837]: time="2024-12-13T14:15:33.516519157Z" level=error msg="encountered an error cleaning up failed sandbox \"92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:15:33.516842 env[1837]: time="2024-12-13T14:15:33.516780326Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-rbxmj,Uid:d999231e-24a7-47cf-8eea-96857833ff01,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:15:33.517319 kubelet[2999]: E1213 14:15:33.517258 2999 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:15:33.517437 kubelet[2999]: E1213 14:15:33.517393 2999 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-rbxmj" Dec 13 14:15:33.517437 kubelet[2999]: E1213 14:15:33.517433 2999 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-rbxmj" Dec 13 14:15:33.517609 kubelet[2999]: E1213 14:15:33.517587 2999 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-rbxmj_kube-system(d999231e-24a7-47cf-8eea-96857833ff01)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-rbxmj_kube-system(d999231e-24a7-47cf-8eea-96857833ff01)\\\": rpc error: code = Unknown desc = 
failed to setup network for sandbox \\\"92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-rbxmj" podUID="d999231e-24a7-47cf-8eea-96857833ff01" Dec 13 14:15:33.901403 env[1837]: time="2024-12-13T14:15:33.901333953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-79sgh,Uid:83621a84-eb8a-4acb-be6b-37240d10ca28,Namespace:calico-system,Attempt:0,}" Dec 13 14:15:34.020570 env[1837]: time="2024-12-13T14:15:34.020496585Z" level=error msg="Failed to destroy network for sandbox \"d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:15:34.025557 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14-shm.mount: Deactivated successfully. 
Dec 13 14:15:34.027861 env[1837]: time="2024-12-13T14:15:34.027784924Z" level=error msg="encountered an error cleaning up failed sandbox \"d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:15:34.028101 env[1837]: time="2024-12-13T14:15:34.028051926Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-79sgh,Uid:83621a84-eb8a-4acb-be6b-37240d10ca28,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:15:34.030028 kubelet[2999]: E1213 14:15:34.028525 2999 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:15:34.030028 kubelet[2999]: E1213 14:15:34.028615 2999 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-79sgh" Dec 13 14:15:34.030028 kubelet[2999]: E1213 14:15:34.028675 2999 
kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-79sgh" Dec 13 14:15:34.031092 kubelet[2999]: E1213 14:15:34.028761 2999 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-79sgh_calico-system(83621a84-eb8a-4acb-be6b-37240d10ca28)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-79sgh_calico-system(83621a84-eb8a-4acb-be6b-37240d10ca28)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-79sgh" podUID="83621a84-eb8a-4acb-be6b-37240d10ca28" Dec 13 14:15:34.090674 kubelet[2999]: I1213 14:15:34.090607 2999 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b" Dec 13 14:15:34.092317 env[1837]: time="2024-12-13T14:15:34.092244455Z" level=info msg="StopPodSandbox for \"02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b\"" Dec 13 14:15:34.095582 kubelet[2999]: I1213 14:15:34.095531 2999 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699" Dec 13 14:15:34.097077 env[1837]: time="2024-12-13T14:15:34.097014343Z" level=info msg="StopPodSandbox for 
\"5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699\"" Dec 13 14:15:34.101927 kubelet[2999]: I1213 14:15:34.101035 2999 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14" Dec 13 14:15:34.102440 env[1837]: time="2024-12-13T14:15:34.102391590Z" level=info msg="StopPodSandbox for \"d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14\"" Dec 13 14:15:34.116702 kubelet[2999]: I1213 14:15:34.116649 2999 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb" Dec 13 14:15:34.119580 env[1837]: time="2024-12-13T14:15:34.119492539Z" level=info msg="StopPodSandbox for \"b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb\"" Dec 13 14:15:34.135545 env[1837]: time="2024-12-13T14:15:34.135446701Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 14:15:34.139752 kubelet[2999]: I1213 14:15:34.139687 2999 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9" Dec 13 14:15:34.141400 env[1837]: time="2024-12-13T14:15:34.141335220Z" level=info msg="StopPodSandbox for \"9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9\"" Dec 13 14:15:34.157048 kubelet[2999]: I1213 14:15:34.155494 2999 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673" Dec 13 14:15:34.157521 env[1837]: time="2024-12-13T14:15:34.157467392Z" level=info msg="StopPodSandbox for \"92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673\"" Dec 13 14:15:34.238086 env[1837]: time="2024-12-13T14:15:34.238001186Z" level=error msg="StopPodSandbox for \"5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699\" failed" error="failed to destroy network for 
sandbox \"5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:15:34.238355 kubelet[2999]: E1213 14:15:34.238314 2999 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699" Dec 13 14:15:34.238543 kubelet[2999]: E1213 14:15:34.238422 2999 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699"} Dec 13 14:15:34.238543 kubelet[2999]: E1213 14:15:34.238488 2999 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"29176771-bd16-429b-96c6-cf2e38be6836\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 14:15:34.238765 kubelet[2999]: E1213 14:15:34.238550 2999 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"29176771-bd16-429b-96c6-cf2e38be6836\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699\\\": plugin type=\\\"calico\\\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bffc5dd46-5tznx" podUID="29176771-bd16-429b-96c6-cf2e38be6836" Dec 13 14:15:34.295161 env[1837]: time="2024-12-13T14:15:34.295081586Z" level=error msg="StopPodSandbox for \"9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9\" failed" error="failed to destroy network for sandbox \"9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:15:34.295751 kubelet[2999]: E1213 14:15:34.295706 2999 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9" Dec 13 14:15:34.295912 kubelet[2999]: E1213 14:15:34.295776 2999 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9"} Dec 13 14:15:34.295912 kubelet[2999]: E1213 14:15:34.295841 2999 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d93f59e9-cea4-4e42-99d4-3d89f412196e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" Dec 13 14:15:34.295912 kubelet[2999]: E1213 14:15:34.295900 2999 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d93f59e9-cea4-4e42-99d4-3d89f412196e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-nxsxq" podUID="d93f59e9-cea4-4e42-99d4-3d89f412196e" Dec 13 14:15:34.309816 env[1837]: time="2024-12-13T14:15:34.309740211Z" level=error msg="StopPodSandbox for \"b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb\" failed" error="failed to destroy network for sandbox \"b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:15:34.310472 kubelet[2999]: E1213 14:15:34.310425 2999 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb" Dec 13 14:15:34.310688 kubelet[2999]: E1213 14:15:34.310522 2999 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb"} Dec 13 14:15:34.310688 kubelet[2999]: E1213 
14:15:34.310663 2999 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ef634f66-b7a3-4b1f-99b3-8db2e225f26a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 14:15:34.310902 kubelet[2999]: E1213 14:15:34.310746 2999 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ef634f66-b7a3-4b1f-99b3-8db2e225f26a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bffc5dd46-2qrs2" podUID="ef634f66-b7a3-4b1f-99b3-8db2e225f26a" Dec 13 14:15:34.336519 env[1837]: time="2024-12-13T14:15:34.336430467Z" level=error msg="StopPodSandbox for \"02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b\" failed" error="failed to destroy network for sandbox \"02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:15:34.336855 kubelet[2999]: E1213 14:15:34.336790 2999 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b" Dec 13 14:15:34.336960 kubelet[2999]: E1213 14:15:34.336869 2999 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b"} Dec 13 14:15:34.336960 kubelet[2999]: E1213 14:15:34.336940 2999 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"649c8cd1-1016-49a5-85ac-f55023619db6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 14:15:34.337170 kubelet[2999]: E1213 14:15:34.336994 2999 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"649c8cd1-1016-49a5-85ac-f55023619db6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-54654bf745-9fq4l" podUID="649c8cd1-1016-49a5-85ac-f55023619db6" Dec 13 14:15:34.339479 env[1837]: time="2024-12-13T14:15:34.339403171Z" level=error msg="StopPodSandbox for \"d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14\" failed" error="failed to destroy network for sandbox \"d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14\": plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:15:34.340078 kubelet[2999]: E1213 14:15:34.340024 2999 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14" Dec 13 14:15:34.340202 kubelet[2999]: E1213 14:15:34.340092 2999 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14"} Dec 13 14:15:34.340202 kubelet[2999]: E1213 14:15:34.340167 2999 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"83621a84-eb8a-4acb-be6b-37240d10ca28\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 14:15:34.340405 kubelet[2999]: E1213 14:15:34.340222 2999 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"83621a84-eb8a-4acb-be6b-37240d10ca28\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-79sgh" podUID="83621a84-eb8a-4acb-be6b-37240d10ca28" Dec 13 14:15:34.358086 env[1837]: time="2024-12-13T14:15:34.358001322Z" level=error msg="StopPodSandbox for \"92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673\" failed" error="failed to destroy network for sandbox \"92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:15:34.358444 kubelet[2999]: E1213 14:15:34.358407 2999 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673" Dec 13 14:15:34.358563 kubelet[2999]: E1213 14:15:34.358481 2999 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673"} Dec 13 14:15:34.358563 kubelet[2999]: E1213 14:15:34.358545 2999 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d999231e-24a7-47cf-8eea-96857833ff01\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 14:15:34.358770 kubelet[2999]: E1213 14:15:34.358600 2999 
pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d999231e-24a7-47cf-8eea-96857833ff01\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-rbxmj" podUID="d999231e-24a7-47cf-8eea-96857833ff01" Dec 13 14:15:42.426836 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount211606874.mount: Deactivated successfully. Dec 13 14:15:42.519411 env[1837]: time="2024-12-13T14:15:42.519350633Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:42.522612 env[1837]: time="2024-12-13T14:15:42.522563096Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:42.525336 env[1837]: time="2024-12-13T14:15:42.525289845Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:42.527769 env[1837]: time="2024-12-13T14:15:42.527707296Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:42.528949 env[1837]: time="2024-12-13T14:15:42.528885329Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference 
\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Dec 13 14:15:42.557993 env[1837]: time="2024-12-13T14:15:42.557913818Z" level=info msg="CreateContainer within sandbox \"e2d2fded34efdcced69b9497477b0e3187f68dcc15f5c97a8ec617da07e7df07\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 14:15:42.590380 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2446336622.mount: Deactivated successfully. Dec 13 14:15:42.592778 env[1837]: time="2024-12-13T14:15:42.592709542Z" level=info msg="CreateContainer within sandbox \"e2d2fded34efdcced69b9497477b0e3187f68dcc15f5c97a8ec617da07e7df07\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"aebae663922ce25f57c507da18de16c321c4c3e464506af3b001c3edf0ce78df\"" Dec 13 14:15:42.595053 env[1837]: time="2024-12-13T14:15:42.594998748Z" level=info msg="StartContainer for \"aebae663922ce25f57c507da18de16c321c4c3e464506af3b001c3edf0ce78df\"" Dec 13 14:15:42.716963 env[1837]: time="2024-12-13T14:15:42.716814857Z" level=info msg="StartContainer for \"aebae663922ce25f57c507da18de16c321c4c3e464506af3b001c3edf0ce78df\" returns successfully" Dec 13 14:15:42.845331 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 14:15:42.845501 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Dec 13 14:15:44.404000 audit[4168]: AVC avc: denied { write } for pid=4168 comm="tee" name="fd" dev="proc" ino=21760 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 14:15:44.412831 kernel: audit: type=1400 audit(1734099344.404:299): avc: denied { write } for pid=4168 comm="tee" name="fd" dev="proc" ino=21760 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 14:15:44.404000 audit[4168]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffee6bea0d a2=241 a3=1b6 items=1 ppid=4147 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:44.404000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Dec 13 14:15:44.436122 kernel: audit: type=1300 audit(1734099344.404:299): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffee6bea0d a2=241 a3=1b6 items=1 ppid=4147 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:44.436287 kernel: audit: type=1307 audit(1734099344.404:299): cwd="/etc/service/enabled/node-status-reporter/log" Dec 13 14:15:44.404000 audit: PATH item=0 name="/dev/fd/63" inode=21742 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:15:44.445988 kernel: audit: type=1302 audit(1734099344.404:299): item=0 name="/dev/fd/63" inode=21742 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:15:44.404000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 14:15:44.463099 kernel: audit: type=1327 audit(1734099344.404:299): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 14:15:44.431000 audit[4190]: AVC avc: denied { write } for pid=4190 comm="tee" name="fd" dev="proc" ino=20925 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 14:15:44.470826 kernel: audit: type=1400 audit(1734099344.431:300): avc: denied { write } for pid=4190 comm="tee" name="fd" dev="proc" ino=20925 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 14:15:44.431000 audit[4190]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff9f56a1e a2=241 a3=1b6 items=1 ppid=4151 pid=4190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:44.482340 kernel: audit: type=1300 audit(1734099344.431:300): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff9f56a1e a2=241 a3=1b6 items=1 ppid=4151 pid=4190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:44.431000 audit: CWD cwd="/etc/service/enabled/cni/log" Dec 13 14:15:44.489956 kernel: audit: type=1307 audit(1734099344.431:300): cwd="/etc/service/enabled/cni/log" Dec 13 14:15:44.431000 audit: PATH item=0 name="/dev/fd/63" inode=20922 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:15:44.500577 
kernel: audit: type=1302 audit(1734099344.431:300): item=0 name="/dev/fd/63" inode=20922 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:15:44.431000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 14:15:44.519944 kernel: audit: type=1327 audit(1734099344.431:300): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 14:15:44.466000 audit[4186]: AVC avc: denied { write } for pid=4186 comm="tee" name="fd" dev="proc" ino=21772 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 14:15:44.466000 audit[4186]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff112ba1d a2=241 a3=1b6 items=1 ppid=4149 pid=4186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:44.466000 audit: CWD cwd="/etc/service/enabled/bird/log" Dec 13 14:15:44.466000 audit: PATH item=0 name="/dev/fd/63" inode=21761 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:15:44.466000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 14:15:44.492000 audit[4182]: AVC avc: denied { write } for pid=4182 comm="tee" name="fd" dev="proc" ino=21776 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 14:15:44.492000 audit[4182]: SYSCALL arch=c00000b7 syscall=56 
success=yes exit=3 a0=ffffffffffffff9c a1=ffffefe1ca1c a2=241 a3=1b6 items=1 ppid=4141 pid=4182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:44.492000 audit: CWD cwd="/etc/service/enabled/felix/log" Dec 13 14:15:44.492000 audit: PATH item=0 name="/dev/fd/63" inode=21753 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:15:44.492000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 14:15:44.515000 audit[4201]: AVC avc: denied { write } for pid=4201 comm="tee" name="fd" dev="proc" ino=21784 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 14:15:44.515000 audit[4201]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffea382a0c a2=241 a3=1b6 items=1 ppid=4143 pid=4201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:44.515000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Dec 13 14:15:44.515000 audit: PATH item=0 name="/dev/fd/63" inode=21769 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:15:44.515000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 14:15:44.571000 audit[4211]: AVC avc: denied { write } for pid=4211 comm="tee" name="fd" dev="proc" ino=21791 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 14:15:44.571000 audit[4211]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe782ca1c a2=241 a3=1b6 items=1 ppid=4144 pid=4211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:44.571000 audit: CWD cwd="/etc/service/enabled/bird6/log" Dec 13 14:15:44.571000 audit: PATH item=0 name="/dev/fd/63" inode=20932 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:15:44.571000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 14:15:44.588000 audit[4209]: AVC avc: denied { write } for pid=4209 comm="tee" name="fd" dev="proc" ino=20948 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 14:15:44.588000 audit[4209]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe10faa1c a2=241 a3=1b6 items=1 ppid=4148 pid=4209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:44.588000 audit: CWD cwd="/etc/service/enabled/confd/log" Dec 13 14:15:44.588000 audit: PATH item=0 name="/dev/fd/63" inode=20931 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:15:44.588000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 14:15:44.893000 audit[4250]: AVC avc: denied { bpf 
} for pid=4250 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:44.893000 audit[4250]: AVC avc: denied { bpf } for pid=4250 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:44.893000 audit[4250]: AVC avc: denied { perfmon } for pid=4250 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:44.893000 audit[4250]: AVC avc: denied { perfmon } for pid=4250 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:44.893000 audit[4250]: AVC avc: denied { perfmon } for pid=4250 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:44.893000 audit[4250]: AVC avc: denied { perfmon } for pid=4250 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:44.893000 audit[4250]: AVC avc: denied { perfmon } for pid=4250 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:44.893000 audit[4250]: AVC avc: denied { bpf } for pid=4250 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:44.893000 audit[4250]: AVC avc: denied { bpf } for pid=4250 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:44.893000 audit: BPF prog-id=10 op=LOAD Dec 13 14:15:44.893000 audit[4250]: SYSCALL arch=c00000b7 syscall=280 
success=yes exit=3 a0=5 a1=ffffcf978988 a2=98 a3=ffffcf978978 items=0 ppid=4142 pid=4250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:44.893000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:15:44.894000 audit: BPF prog-id=10 op=UNLOAD Dec 13 14:15:44.894000 audit[4250]: AVC avc: denied { bpf } for pid=4250 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:44.894000 audit[4250]: AVC avc: denied { bpf } for pid=4250 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:44.894000 audit[4250]: AVC avc: denied { perfmon } for pid=4250 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:44.900980 env[1837]: time="2024-12-13T14:15:44.895512567Z" level=info msg="StopPodSandbox for \"02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b\"" Dec 13 14:15:44.894000 audit[4250]: AVC avc: denied { perfmon } for pid=4250 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:44.894000 audit[4250]: AVC avc: denied { perfmon } for pid=4250 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:44.894000 audit[4250]: AVC avc: denied { perfmon } for pid=4250 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:44.894000 audit[4250]: AVC avc: denied { perfmon } for pid=4250 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:44.894000 audit[4250]: AVC avc: denied { bpf } for pid=4250 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:44.894000 audit[4250]: AVC avc: denied { bpf } for pid=4250 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:44.894000 audit: BPF prog-id=11 op=LOAD Dec 13 14:15:44.894000 audit[4250]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffcf978618 a2=74 a3=95 items=0 ppid=4142 pid=4250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:44.894000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:15:44.900000 audit: BPF prog-id=11 op=UNLOAD Dec 13 14:15:44.900000 audit[4250]: AVC avc: denied { bpf } for pid=4250 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:44.900000 audit[4250]: AVC avc: denied { bpf } for pid=4250 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:44.900000 audit[4250]: AVC avc: denied { perfmon } for pid=4250 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:44.900000 audit[4250]: AVC avc: denied { perfmon } for pid=4250 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:44.900000 audit[4250]: AVC avc: denied { perfmon } for 
pid=4250 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:44.900000 audit[4250]: AVC avc: denied { perfmon } for pid=4250 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:44.900000 audit[4250]: AVC avc: denied { perfmon } for pid=4250 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:44.900000 audit[4250]: AVC avc: denied { bpf } for pid=4250 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:44.900000 audit[4250]: AVC avc: denied { bpf } for pid=4250 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:44.900000 audit: BPF prog-id=12 op=LOAD Dec 13 14:15:44.900000 audit[4250]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffcf978678 a2=94 a3=2 items=0 ppid=4142 pid=4250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:44.900000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:15:44.901000 audit: BPF prog-id=12 op=UNLOAD Dec 13 14:15:45.092021 kubelet[2999]: I1213 14:15:45.091113 2999 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-db756" podStartSLOduration=3.298666427 podStartE2EDuration="25.091008673s" podCreationTimestamp="2024-12-13 14:15:20 +0000 UTC" firstStartedPulling="2024-12-13 14:15:20.736995123 +0000 UTC m=+23.241600879" lastFinishedPulling="2024-12-13 14:15:42.529337369 +0000 UTC m=+45.033943125" 
observedRunningTime="2024-12-13 14:15:43.218494443 +0000 UTC m=+45.723100247" watchObservedRunningTime="2024-12-13 14:15:45.091008673 +0000 UTC m=+47.595614441" Dec 13 14:15:45.187565 env[1837]: 2024-12-13 14:15:45.093 [INFO][4264] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b" Dec 13 14:15:45.187565 env[1837]: 2024-12-13 14:15:45.093 [INFO][4264] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b" iface="eth0" netns="/var/run/netns/cni-faebb0dd-95d7-3fe2-9456-a4671ab856d7" Dec 13 14:15:45.187565 env[1837]: 2024-12-13 14:15:45.093 [INFO][4264] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b" iface="eth0" netns="/var/run/netns/cni-faebb0dd-95d7-3fe2-9456-a4671ab856d7" Dec 13 14:15:45.187565 env[1837]: 2024-12-13 14:15:45.094 [INFO][4264] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b" iface="eth0" netns="/var/run/netns/cni-faebb0dd-95d7-3fe2-9456-a4671ab856d7" Dec 13 14:15:45.187565 env[1837]: 2024-12-13 14:15:45.094 [INFO][4264] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b" Dec 13 14:15:45.187565 env[1837]: 2024-12-13 14:15:45.094 [INFO][4264] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b" Dec 13 14:15:45.187565 env[1837]: 2024-12-13 14:15:45.155 [INFO][4272] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b" HandleID="k8s-pod-network.02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b" Workload="ip--172--31--26--163-k8s-calico--kube--controllers--54654bf745--9fq4l-eth0" Dec 13 14:15:45.187565 env[1837]: 2024-12-13 14:15:45.155 [INFO][4272] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:15:45.187565 env[1837]: 2024-12-13 14:15:45.155 [INFO][4272] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:15:45.187565 env[1837]: 2024-12-13 14:15:45.171 [WARNING][4272] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b" HandleID="k8s-pod-network.02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b" Workload="ip--172--31--26--163-k8s-calico--kube--controllers--54654bf745--9fq4l-eth0" Dec 13 14:15:45.187565 env[1837]: 2024-12-13 14:15:45.171 [INFO][4272] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b" HandleID="k8s-pod-network.02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b" Workload="ip--172--31--26--163-k8s-calico--kube--controllers--54654bf745--9fq4l-eth0" Dec 13 14:15:45.187565 env[1837]: 2024-12-13 14:15:45.174 [INFO][4272] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:15:45.187565 env[1837]: 2024-12-13 14:15:45.181 [INFO][4264] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b" Dec 13 14:15:45.197462 systemd[1]: run-netns-cni\x2dfaebb0dd\x2d95d7\x2d3fe2\x2d9456\x2da4671ab856d7.mount: Deactivated successfully. 
Dec 13 14:15:45.201379 env[1837]: time="2024-12-13T14:15:45.201301660Z" level=info msg="TearDown network for sandbox \"02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b\" successfully" Dec 13 14:15:45.201551 env[1837]: time="2024-12-13T14:15:45.201369788Z" level=info msg="StopPodSandbox for \"02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b\" returns successfully" Dec 13 14:15:45.202962 env[1837]: time="2024-12-13T14:15:45.202843947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54654bf745-9fq4l,Uid:649c8cd1-1016-49a5-85ac-f55023619db6,Namespace:calico-system,Attempt:1,}" Dec 13 14:15:45.238000 audit[4250]: AVC avc: denied { bpf } for pid=4250 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.238000 audit[4250]: AVC avc: denied { bpf } for pid=4250 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.238000 audit[4250]: AVC avc: denied { perfmon } for pid=4250 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.238000 audit[4250]: AVC avc: denied { perfmon } for pid=4250 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.238000 audit[4250]: AVC avc: denied { perfmon } for pid=4250 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.238000 audit[4250]: AVC avc: denied { perfmon } for pid=4250 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.238000 audit[4250]: AVC avc: denied { perfmon } 
for pid=4250 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.238000 audit[4250]: AVC avc: denied { bpf } for pid=4250 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.238000 audit[4250]: AVC avc: denied { bpf } for pid=4250 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.238000 audit: BPF prog-id=13 op=LOAD Dec 13 14:15:45.238000 audit[4250]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffcf978638 a2=40 a3=ffffcf978668 items=0 ppid=4142 pid=4250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.238000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:15:45.240000 audit: BPF prog-id=13 op=UNLOAD Dec 13 14:15:45.240000 audit[4250]: AVC avc: denied { perfmon } for pid=4250 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.240000 audit[4250]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=0 a1=ffffcf978750 a2=50 a3=0 items=0 ppid=4142 pid=4250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.240000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:15:45.257000 audit[4250]: AVC avc: denied { bpf } for pid=4250 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.257000 
audit[4250]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffcf9786a8 a2=28 a3=ffffcf9787d8 items=0 ppid=4142 pid=4250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.257000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:15:45.258000 audit[4250]: AVC avc: denied { bpf } for pid=4250 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.258000 audit[4250]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcf9786d8 a2=28 a3=ffffcf978808 items=0 ppid=4142 pid=4250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.258000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:15:45.258000 audit[4250]: AVC avc: denied { bpf } for pid=4250 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.258000 audit[4250]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcf978588 a2=28 a3=ffffcf9786b8 items=0 ppid=4142 pid=4250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.258000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:15:45.258000 audit[4250]: AVC avc: denied { bpf } for pid=4250 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.258000 audit[4250]: SYSCALL arch=c00000b7 
syscall=280 success=yes exit=4 a0=12 a1=ffffcf9786f8 a2=28 a3=ffffcf978828 items=0 ppid=4142 pid=4250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.258000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:15:45.259000 audit[4250]: AVC avc: denied { bpf } for pid=4250 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.259000 audit[4250]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffcf9786d8 a2=28 a3=ffffcf978808 items=0 ppid=4142 pid=4250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.259000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:15:45.259000 audit[4250]: AVC avc: denied { bpf } for pid=4250 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.259000 audit[4250]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffcf9786c8 a2=28 a3=ffffcf9787f8 items=0 ppid=4142 pid=4250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.259000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:15:45.260000 audit[4250]: AVC avc: denied { bpf } for pid=4250 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.260000 audit[4250]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 
a1=ffffcf9786f8 a2=28 a3=ffffcf978828 items=0 ppid=4142 pid=4250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.260000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:15:45.260000 audit[4250]: AVC avc: denied { bpf } for pid=4250 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.260000 audit[4250]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcf9786d8 a2=28 a3=ffffcf978808 items=0 ppid=4142 pid=4250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.260000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:15:45.260000 audit[4250]: AVC avc: denied { bpf } for pid=4250 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.260000 audit[4250]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcf9786f8 a2=28 a3=ffffcf978828 items=0 ppid=4142 pid=4250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.260000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:15:45.261000 audit[4250]: AVC avc: denied { bpf } for pid=4250 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.261000 audit[4250]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcf9786c8 a2=28 a3=ffffcf9787f8 
items=0 ppid=4142 pid=4250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.261000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:15:45.261000 audit[4250]: AVC avc: denied { bpf } for pid=4250 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.261000 audit[4250]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffcf978748 a2=28 a3=ffffcf978888 items=0 ppid=4142 pid=4250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.261000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:15:45.262000 audit[4250]: AVC avc: denied { perfmon } for pid=4250 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.262000 audit[4250]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffcf978480 a2=50 a3=0 items=0 ppid=4142 pid=4250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.262000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:15:45.262000 audit[4250]: AVC avc: denied { bpf } for pid=4250 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.262000 audit[4250]: AVC avc: denied { bpf } for pid=4250 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Dec 13 14:15:45.262000 audit[4250]: AVC avc: denied { perfmon } for pid=4250 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.262000 audit[4250]: AVC avc: denied { perfmon } for pid=4250 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.262000 audit[4250]: AVC avc: denied { perfmon } for pid=4250 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.262000 audit[4250]: AVC avc: denied { perfmon } for pid=4250 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.262000 audit[4250]: AVC avc: denied { perfmon } for pid=4250 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.262000 audit[4250]: AVC avc: denied { bpf } for pid=4250 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.262000 audit[4250]: AVC avc: denied { bpf } for pid=4250 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.262000 audit: BPF prog-id=14 op=LOAD Dec 13 14:15:45.262000 audit[4250]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffcf978488 a2=94 a3=5 items=0 ppid=4142 pid=4250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.262000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:15:45.264000 audit: BPF prog-id=14 op=UNLOAD Dec 13 14:15:45.264000 audit[4250]: AVC avc: denied { perfmon } for pid=4250 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.264000 audit[4250]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffcf978590 a2=50 a3=0 items=0 ppid=4142 pid=4250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.264000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:15:45.264000 audit[4250]: AVC avc: denied { bpf } for pid=4250 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.264000 audit[4250]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=16 a1=ffffcf9786d8 a2=4 a3=3 items=0 ppid=4142 pid=4250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.264000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:15:45.265000 audit[4250]: AVC avc: denied { bpf } for pid=4250 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.265000 audit[4250]: AVC avc: denied { bpf } for pid=4250 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.265000 audit[4250]: AVC avc: denied { perfmon } for pid=4250 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.265000 audit[4250]: AVC avc: denied { bpf } for pid=4250 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.265000 audit[4250]: AVC avc: denied { perfmon } for pid=4250 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.265000 audit[4250]: AVC avc: denied { perfmon } for pid=4250 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.265000 audit[4250]: AVC avc: denied { perfmon } for pid=4250 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.265000 audit[4250]: AVC avc: denied { perfmon } for pid=4250 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.265000 audit[4250]: AVC avc: denied { perfmon } for pid=4250 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.265000 audit[4250]: AVC avc: denied { bpf } for pid=4250 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.265000 audit[4250]: AVC avc: denied { confidentiality } for pid=4250 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 14:15:45.265000 audit[4250]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffcf9786b8 a2=94 a3=6 items=0 ppid=4142 pid=4250 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.265000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:15:45.274000 audit[4250]: AVC avc: denied { bpf } for pid=4250 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.274000 audit[4250]: AVC avc: denied { bpf } for pid=4250 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.274000 audit[4250]: AVC avc: denied { perfmon } for pid=4250 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.274000 audit[4250]: AVC avc: denied { bpf } for pid=4250 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.274000 audit[4250]: AVC avc: denied { perfmon } for pid=4250 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.274000 audit[4250]: AVC avc: denied { perfmon } for pid=4250 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.274000 audit[4250]: AVC avc: denied { perfmon } for pid=4250 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.274000 audit[4250]: AVC avc: denied { perfmon } for pid=4250 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 
14:15:45.274000 audit[4250]: AVC avc: denied { perfmon } for pid=4250 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.274000 audit[4250]: AVC avc: denied { bpf } for pid=4250 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.274000 audit[4250]: AVC avc: denied { confidentiality } for pid=4250 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 14:15:45.274000 audit[4250]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffcf977e88 a2=94 a3=83 items=0 ppid=4142 pid=4250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.274000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:15:45.276000 audit[4250]: AVC avc: denied { bpf } for pid=4250 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.276000 audit[4250]: AVC avc: denied { bpf } for pid=4250 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.276000 audit[4250]: AVC avc: denied { perfmon } for pid=4250 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.276000 audit[4250]: AVC avc: denied { bpf } for pid=4250 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.276000 audit[4250]: AVC 
avc: denied { perfmon } for pid=4250 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.276000 audit[4250]: AVC avc: denied { perfmon } for pid=4250 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.276000 audit[4250]: AVC avc: denied { perfmon } for pid=4250 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.276000 audit[4250]: AVC avc: denied { perfmon } for pid=4250 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.276000 audit[4250]: AVC avc: denied { perfmon } for pid=4250 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.276000 audit[4250]: AVC avc: denied { bpf } for pid=4250 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.276000 audit[4250]: AVC avc: denied { confidentiality } for pid=4250 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 14:15:45.276000 audit[4250]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffcf977e88 a2=94 a3=83 items=0 ppid=4142 pid=4250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.276000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:15:45.305000 audit[4281]: AVC avc: denied { bpf } for 
pid=4281 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.305000 audit[4281]: AVC avc: denied { bpf } for pid=4281 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.305000 audit[4281]: AVC avc: denied { perfmon } for pid=4281 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.305000 audit[4281]: AVC avc: denied { perfmon } for pid=4281 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.305000 audit[4281]: AVC avc: denied { perfmon } for pid=4281 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.305000 audit[4281]: AVC avc: denied { perfmon } for pid=4281 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.305000 audit[4281]: AVC avc: denied { perfmon } for pid=4281 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.305000 audit[4281]: AVC avc: denied { bpf } for pid=4281 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.305000 audit[4281]: AVC avc: denied { bpf } for pid=4281 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.305000 audit: BPF prog-id=15 op=LOAD Dec 13 14:15:45.305000 audit[4281]: SYSCALL arch=c00000b7 syscall=280 success=yes 
exit=3 a0=5 a1=ffffd15b3308 a2=98 a3=ffffd15b32f8 items=0 ppid=4142 pid=4281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.305000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 14:15:45.306000 audit: BPF prog-id=15 op=UNLOAD Dec 13 14:15:45.306000 audit[4281]: AVC avc: denied { bpf } for pid=4281 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.306000 audit[4281]: AVC avc: denied { bpf } for pid=4281 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.306000 audit[4281]: AVC avc: denied { perfmon } for pid=4281 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.306000 audit[4281]: AVC avc: denied { perfmon } for pid=4281 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.306000 audit[4281]: AVC avc: denied { perfmon } for pid=4281 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.306000 audit[4281]: AVC avc: denied { perfmon } for pid=4281 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.306000 audit[4281]: AVC avc: denied { perfmon } for pid=4281 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.306000 audit[4281]: AVC avc: denied { bpf } for pid=4281 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.306000 audit[4281]: AVC avc: denied { bpf } for pid=4281 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.306000 audit: BPF prog-id=16 op=LOAD Dec 13 14:15:45.306000 audit[4281]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd15b31b8 a2=74 a3=95 items=0 ppid=4142 pid=4281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.306000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 14:15:45.306000 audit: BPF prog-id=16 op=UNLOAD Dec 13 14:15:45.306000 audit[4281]: AVC avc: denied { bpf } for pid=4281 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.306000 audit[4281]: AVC avc: denied { bpf } for pid=4281 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.306000 audit[4281]: AVC avc: denied { perfmon } for pid=4281 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.306000 audit[4281]: AVC avc: 
denied { perfmon } for pid=4281 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.306000 audit[4281]: AVC avc: denied { perfmon } for pid=4281 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.306000 audit[4281]: AVC avc: denied { perfmon } for pid=4281 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.306000 audit[4281]: AVC avc: denied { perfmon } for pid=4281 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.306000 audit[4281]: AVC avc: denied { bpf } for pid=4281 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.306000 audit[4281]: AVC avc: denied { bpf } for pid=4281 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.306000 audit: BPF prog-id=17 op=LOAD Dec 13 14:15:45.306000 audit[4281]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd15b31e8 a2=40 a3=ffffd15b3218 items=0 ppid=4142 pid=4281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.306000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 14:15:45.306000 audit: BPF prog-id=17 op=UNLOAD 
Dec 13 14:15:45.507599 systemd-networkd[1507]: vxlan.calico: Link UP Dec 13 14:15:45.507613 systemd-networkd[1507]: vxlan.calico: Gained carrier Dec 13 14:15:45.512110 (udev-worker)[4314]: Network interface NamePolicy= disabled on kernel command line. Dec 13 14:15:45.569000 audit[4325]: AVC avc: denied { bpf } for pid=4325 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.569000 audit[4325]: AVC avc: denied { bpf } for pid=4325 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.569000 audit[4325]: AVC avc: denied { perfmon } for pid=4325 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.569000 audit[4325]: AVC avc: denied { perfmon } for pid=4325 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.569000 audit[4325]: AVC avc: denied { perfmon } for pid=4325 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.569000 audit[4325]: AVC avc: denied { perfmon } for pid=4325 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.569000 audit[4325]: AVC avc: denied { perfmon } for pid=4325 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.569000 audit[4325]: AVC avc: denied { bpf } for pid=4325 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.569000 
audit[4325]: AVC avc: denied { bpf } for pid=4325 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.569000 audit: BPF prog-id=18 op=LOAD Dec 13 14:15:45.569000 audit[4325]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc25d8338 a2=98 a3=ffffc25d8328 items=0 ppid=4142 pid=4325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.569000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:15:45.570000 audit: BPF prog-id=18 op=UNLOAD Dec 13 14:15:45.570000 audit[4325]: AVC avc: denied { bpf } for pid=4325 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.570000 audit[4325]: AVC avc: denied { bpf } for pid=4325 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.570000 audit[4325]: AVC avc: denied { perfmon } for pid=4325 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.570000 audit[4325]: AVC avc: denied { perfmon } for pid=4325 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.570000 audit[4325]: AVC avc: denied { perfmon } for pid=4325 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.570000 
audit[4325]: AVC avc: denied { perfmon } for pid=4325 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.570000 audit[4325]: AVC avc: denied { perfmon } for pid=4325 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.570000 audit[4325]: AVC avc: denied { bpf } for pid=4325 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.570000 audit[4325]: AVC avc: denied { bpf } for pid=4325 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.570000 audit: BPF prog-id=19 op=LOAD Dec 13 14:15:45.570000 audit[4325]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc25d8018 a2=74 a3=95 items=0 ppid=4142 pid=4325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.570000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:15:45.570000 audit: BPF prog-id=19 op=UNLOAD Dec 13 14:15:45.570000 audit[4325]: AVC avc: denied { bpf } for pid=4325 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.570000 audit[4325]: AVC avc: denied { bpf } for pid=4325 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.570000 audit[4325]: AVC avc: 
denied { perfmon } for pid=4325 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.570000 audit[4325]: AVC avc: denied { perfmon } for pid=4325 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.570000 audit[4325]: AVC avc: denied { perfmon } for pid=4325 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.570000 audit[4325]: AVC avc: denied { perfmon } for pid=4325 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.570000 audit[4325]: AVC avc: denied { perfmon } for pid=4325 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.570000 audit[4325]: AVC avc: denied { bpf } for pid=4325 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.570000 audit[4325]: AVC avc: denied { bpf } for pid=4325 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.570000 audit: BPF prog-id=20 op=LOAD Dec 13 14:15:45.570000 audit[4325]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc25d8078 a2=94 a3=2 items=0 ppid=4142 pid=4325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.570000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:15:45.571000 audit: BPF prog-id=20 op=UNLOAD Dec 13 14:15:45.571000 audit[4325]: AVC avc: denied { bpf } for pid=4325 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.571000 audit[4325]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffc25d80a8 a2=28 a3=ffffc25d81d8 items=0 ppid=4142 pid=4325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.571000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:15:45.571000 audit[4325]: AVC avc: denied { bpf } for pid=4325 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.571000 audit[4325]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc25d80d8 a2=28 a3=ffffc25d8208 items=0 ppid=4142 pid=4325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.571000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:15:45.571000 audit[4325]: AVC avc: denied { bpf } for pid=4325 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.571000 audit[4325]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc25d7f88 a2=28 a3=ffffc25d80b8 items=0 ppid=4142 pid=4325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.571000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:15:45.571000 audit[4325]: AVC avc: denied { bpf } for pid=4325 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.571000 audit[4325]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffc25d80f8 a2=28 a3=ffffc25d8228 items=0 ppid=4142 pid=4325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.571000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:15:45.573000 audit[4325]: AVC avc: denied { bpf } for pid=4325 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.573000 audit[4325]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffc25d80d8 a2=28 a3=ffffc25d8208 items=0 ppid=4142 pid=4325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.573000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:15:45.573000 audit[4325]: AVC avc: denied { bpf } for pid=4325 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.573000 audit[4325]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffc25d80c8 a2=28 a3=ffffc25d81f8 items=0 ppid=4142 pid=4325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.573000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:15:45.573000 audit[4325]: AVC avc: denied { bpf } for pid=4325 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.573000 audit[4325]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffc25d80f8 a2=28 a3=ffffc25d8228 items=0 ppid=4142 pid=4325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.573000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 
14:15:45.573000 audit[4325]: AVC avc: denied { bpf } for pid=4325 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.573000 audit[4325]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc25d80d8 a2=28 a3=ffffc25d8208 items=0 ppid=4142 pid=4325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.573000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:15:45.573000 audit[4325]: AVC avc: denied { bpf } for pid=4325 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.573000 audit[4325]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc25d80f8 a2=28 a3=ffffc25d8228 items=0 ppid=4142 pid=4325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.573000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:15:45.573000 audit[4325]: AVC avc: denied { bpf } for pid=4325 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.573000 audit[4325]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc25d80c8 a2=28 a3=ffffc25d81f8 items=0 ppid=4142 pid=4325 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.573000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:15:45.573000 audit[4325]: AVC avc: denied { bpf } for pid=4325 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.573000 audit[4325]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffc25d8148 a2=28 a3=ffffc25d8288 items=0 ppid=4142 pid=4325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.573000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:15:45.573000 audit[4325]: AVC avc: denied { bpf } for pid=4325 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.573000 audit[4325]: AVC avc: denied { bpf } for pid=4325 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.573000 audit[4325]: AVC avc: denied { perfmon } for pid=4325 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.573000 audit[4325]: AVC avc: denied { perfmon } for pid=4325 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.573000 audit[4325]: AVC avc: denied { perfmon } for pid=4325 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.573000 audit[4325]: AVC avc: denied { perfmon } for pid=4325 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.573000 audit[4325]: AVC avc: denied { perfmon } for pid=4325 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.573000 audit[4325]: AVC avc: denied { bpf } for pid=4325 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.573000 audit[4325]: AVC avc: denied { bpf } for pid=4325 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.573000 audit: BPF prog-id=21 op=LOAD Dec 13 14:15:45.573000 audit[4325]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc25d7f68 a2=40 a3=ffffc25d7f98 items=0 ppid=4142 pid=4325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.573000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:15:45.574000 audit: BPF prog-id=21 op=UNLOAD Dec 13 14:15:45.574000 audit[4325]: AVC avc: denied { bpf } for pid=4325 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.574000 audit[4325]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=0 a1=ffffc25d7f90 a2=50 a3=0 items=0 ppid=4142 pid=4325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.574000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:15:45.578000 audit[4325]: AVC avc: denied { bpf } for pid=4325 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.578000 audit[4325]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=0 a1=ffffc25d7f90 a2=50 a3=0 items=0 ppid=4142 pid=4325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.578000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:15:45.578000 audit[4325]: AVC avc: denied { bpf } for pid=4325 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.578000 audit[4325]: AVC avc: denied { bpf } for pid=4325 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.578000 audit[4325]: AVC avc: denied { bpf } for pid=4325 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.578000 audit[4325]: AVC avc: denied { perfmon } for pid=4325 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.578000 audit[4325]: AVC avc: denied { perfmon } for pid=4325 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.578000 audit[4325]: AVC avc: denied { perfmon } for pid=4325 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.578000 audit[4325]: AVC avc: denied { perfmon } for pid=4325 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.578000 audit[4325]: AVC avc: denied { perfmon } for pid=4325 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.578000 audit[4325]: AVC avc: denied { bpf } for pid=4325 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.578000 audit[4325]: AVC avc: denied { bpf } for pid=4325 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.578000 audit: BPF prog-id=22 op=LOAD Dec 13 14:15:45.578000 audit[4325]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc25d76f8 a2=94 a3=2 items=0 ppid=4142 pid=4325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.578000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:15:45.581000 audit: BPF prog-id=22 op=UNLOAD Dec 13 14:15:45.581000 audit[4325]: AVC avc: denied { bpf } for pid=4325 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.581000 audit[4325]: AVC avc: denied { bpf } for pid=4325 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.581000 audit[4325]: AVC avc: denied { bpf } for pid=4325 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.581000 audit[4325]: AVC avc: denied { perfmon } for pid=4325 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.581000 audit[4325]: AVC avc: denied { perfmon } for pid=4325 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.581000 audit[4325]: AVC avc: denied { perfmon } for pid=4325 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.581000 audit[4325]: AVC avc: denied { perfmon } for pid=4325 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.581000 audit[4325]: AVC avc: denied { perfmon } for pid=4325 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.581000 audit[4325]: AVC avc: denied { bpf } for pid=4325 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.581000 audit[4325]: AVC avc: denied { bpf } for pid=4325 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.581000 audit: BPF prog-id=23 op=LOAD Dec 13 14:15:45.581000 audit[4325]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc25d7888 a2=94 a3=2d items=0 ppid=4142 pid=4325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.581000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:15:45.580300 (udev-worker)[4107]: Network interface NamePolicy= disabled on kernel command line. 
Dec 13 14:15:45.599000 audit[4329]: AVC avc: denied { bpf } for pid=4329 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.599000 audit[4329]: AVC avc: denied { bpf } for pid=4329 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.599000 audit[4329]: AVC avc: denied { perfmon } for pid=4329 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.599000 audit[4329]: AVC avc: denied { perfmon } for pid=4329 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.599000 audit[4329]: AVC avc: denied { perfmon } for pid=4329 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.599000 audit[4329]: AVC avc: denied { perfmon } for pid=4329 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.599000 audit[4329]: AVC avc: denied { perfmon } for pid=4329 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.599000 audit[4329]: AVC avc: denied { bpf } for pid=4329 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.599000 audit[4329]: AVC avc: denied { bpf } for pid=4329 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.599000 audit: BPF prog-id=24 op=LOAD Dec 13 
14:15:45.599000 audit[4329]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcb817368 a2=98 a3=ffffcb817358 items=0 ppid=4142 pid=4329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.599000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:15:45.599000 audit: BPF prog-id=24 op=UNLOAD Dec 13 14:15:45.599000 audit[4329]: AVC avc: denied { bpf } for pid=4329 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.599000 audit[4329]: AVC avc: denied { bpf } for pid=4329 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.599000 audit[4329]: AVC avc: denied { perfmon } for pid=4329 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.599000 audit[4329]: AVC avc: denied { perfmon } for pid=4329 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.599000 audit[4329]: AVC avc: denied { perfmon } for pid=4329 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.599000 audit[4329]: AVC avc: denied { perfmon } for pid=4329 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.599000 audit[4329]: AVC avc: denied { perfmon } for pid=4329 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.599000 audit[4329]: AVC avc: denied { bpf } for pid=4329 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.599000 audit[4329]: AVC avc: denied { bpf } for pid=4329 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.599000 audit: BPF prog-id=25 op=LOAD Dec 13 14:15:45.599000 audit[4329]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffcb816ff8 a2=74 a3=95 items=0 ppid=4142 pid=4329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.599000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:15:45.599000 audit: BPF prog-id=25 op=UNLOAD Dec 13 14:15:45.599000 audit[4329]: AVC avc: denied { bpf } for pid=4329 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.599000 audit[4329]: AVC avc: denied { bpf } for pid=4329 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.599000 audit[4329]: AVC avc: denied { perfmon } for pid=4329 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.599000 audit[4329]: AVC avc: denied { perfmon } for pid=4329 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.599000 audit[4329]: AVC avc: denied { perfmon } for pid=4329 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.599000 audit[4329]: AVC avc: denied { perfmon } for pid=4329 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.599000 audit[4329]: AVC avc: denied { perfmon } for pid=4329 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.599000 audit[4329]: AVC avc: denied { bpf } for pid=4329 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.599000 audit[4329]: AVC avc: denied { bpf } for pid=4329 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.599000 audit: BPF prog-id=26 op=LOAD Dec 13 14:15:45.599000 audit[4329]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffcb817058 a2=94 a3=2 items=0 ppid=4142 pid=4329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.599000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:15:45.599000 audit: BPF prog-id=26 op=UNLOAD Dec 13 14:15:45.769140 systemd-networkd[1507]: cali008f78e0953: Link UP Dec 13 14:15:45.776779 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali008f78e0953: 
link becomes ready Dec 13 14:15:45.776295 systemd-networkd[1507]: cali008f78e0953: Gained carrier Dec 13 14:15:45.813568 env[1837]: 2024-12-13 14:15:45.413 [INFO][4282] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--163-k8s-calico--kube--controllers--54654bf745--9fq4l-eth0 calico-kube-controllers-54654bf745- calico-system 649c8cd1-1016-49a5-85ac-f55023619db6 760 0 2024-12-13 14:15:20 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:54654bf745 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-26-163 calico-kube-controllers-54654bf745-9fq4l eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali008f78e0953 [] []}} ContainerID="7006def5caab23fe965eec24bcf24461b7389d5873b10e57340ca7fe51396050" Namespace="calico-system" Pod="calico-kube-controllers-54654bf745-9fq4l" WorkloadEndpoint="ip--172--31--26--163-k8s-calico--kube--controllers--54654bf745--9fq4l-" Dec 13 14:15:45.813568 env[1837]: 2024-12-13 14:15:45.414 [INFO][4282] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7006def5caab23fe965eec24bcf24461b7389d5873b10e57340ca7fe51396050" Namespace="calico-system" Pod="calico-kube-controllers-54654bf745-9fq4l" WorkloadEndpoint="ip--172--31--26--163-k8s-calico--kube--controllers--54654bf745--9fq4l-eth0" Dec 13 14:15:45.813568 env[1837]: 2024-12-13 14:15:45.622 [INFO][4307] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7006def5caab23fe965eec24bcf24461b7389d5873b10e57340ca7fe51396050" HandleID="k8s-pod-network.7006def5caab23fe965eec24bcf24461b7389d5873b10e57340ca7fe51396050" Workload="ip--172--31--26--163-k8s-calico--kube--controllers--54654bf745--9fq4l-eth0" Dec 13 14:15:45.813568 env[1837]: 2024-12-13 14:15:45.654 
[INFO][4307] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7006def5caab23fe965eec24bcf24461b7389d5873b10e57340ca7fe51396050" HandleID="k8s-pod-network.7006def5caab23fe965eec24bcf24461b7389d5873b10e57340ca7fe51396050" Workload="ip--172--31--26--163-k8s-calico--kube--controllers--54654bf745--9fq4l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000429ac0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-163", "pod":"calico-kube-controllers-54654bf745-9fq4l", "timestamp":"2024-12-13 14:15:45.622322525 +0000 UTC"}, Hostname:"ip-172-31-26-163", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 14:15:45.813568 env[1837]: 2024-12-13 14:15:45.654 [INFO][4307] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:15:45.813568 env[1837]: 2024-12-13 14:15:45.654 [INFO][4307] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 14:15:45.813568 env[1837]: 2024-12-13 14:15:45.654 [INFO][4307] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-163' Dec 13 14:15:45.813568 env[1837]: 2024-12-13 14:15:45.658 [INFO][4307] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7006def5caab23fe965eec24bcf24461b7389d5873b10e57340ca7fe51396050" host="ip-172-31-26-163" Dec 13 14:15:45.813568 env[1837]: 2024-12-13 14:15:45.668 [INFO][4307] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-26-163" Dec 13 14:15:45.813568 env[1837]: 2024-12-13 14:15:45.680 [INFO][4307] ipam/ipam.go 489: Trying affinity for 192.168.120.128/26 host="ip-172-31-26-163" Dec 13 14:15:45.813568 env[1837]: 2024-12-13 14:15:45.689 [INFO][4307] ipam/ipam.go 155: Attempting to load block cidr=192.168.120.128/26 host="ip-172-31-26-163" Dec 13 14:15:45.813568 env[1837]: 2024-12-13 14:15:45.694 [INFO][4307] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.120.128/26 host="ip-172-31-26-163" Dec 13 14:15:45.813568 env[1837]: 2024-12-13 14:15:45.695 [INFO][4307] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.120.128/26 handle="k8s-pod-network.7006def5caab23fe965eec24bcf24461b7389d5873b10e57340ca7fe51396050" host="ip-172-31-26-163" Dec 13 14:15:45.813568 env[1837]: 2024-12-13 14:15:45.698 [INFO][4307] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7006def5caab23fe965eec24bcf24461b7389d5873b10e57340ca7fe51396050 Dec 13 14:15:45.813568 env[1837]: 2024-12-13 14:15:45.715 [INFO][4307] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.120.128/26 handle="k8s-pod-network.7006def5caab23fe965eec24bcf24461b7389d5873b10e57340ca7fe51396050" host="ip-172-31-26-163" Dec 13 14:15:45.813568 env[1837]: 2024-12-13 14:15:45.725 [INFO][4307] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.120.129/26] block=192.168.120.128/26 
handle="k8s-pod-network.7006def5caab23fe965eec24bcf24461b7389d5873b10e57340ca7fe51396050" host="ip-172-31-26-163" Dec 13 14:15:45.813568 env[1837]: 2024-12-13 14:15:45.726 [INFO][4307] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.120.129/26] handle="k8s-pod-network.7006def5caab23fe965eec24bcf24461b7389d5873b10e57340ca7fe51396050" host="ip-172-31-26-163" Dec 13 14:15:45.813568 env[1837]: 2024-12-13 14:15:45.726 [INFO][4307] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:15:45.813568 env[1837]: 2024-12-13 14:15:45.726 [INFO][4307] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.120.129/26] IPv6=[] ContainerID="7006def5caab23fe965eec24bcf24461b7389d5873b10e57340ca7fe51396050" HandleID="k8s-pod-network.7006def5caab23fe965eec24bcf24461b7389d5873b10e57340ca7fe51396050" Workload="ip--172--31--26--163-k8s-calico--kube--controllers--54654bf745--9fq4l-eth0" Dec 13 14:15:45.814937 env[1837]: 2024-12-13 14:15:45.730 [INFO][4282] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7006def5caab23fe965eec24bcf24461b7389d5873b10e57340ca7fe51396050" Namespace="calico-system" Pod="calico-kube-controllers-54654bf745-9fq4l" WorkloadEndpoint="ip--172--31--26--163-k8s-calico--kube--controllers--54654bf745--9fq4l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--163-k8s-calico--kube--controllers--54654bf745--9fq4l-eth0", GenerateName:"calico-kube-controllers-54654bf745-", Namespace:"calico-system", SelfLink:"", UID:"649c8cd1-1016-49a5-85ac-f55023619db6", ResourceVersion:"760", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 15, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54654bf745", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-163", ContainerID:"", Pod:"calico-kube-controllers-54654bf745-9fq4l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.120.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali008f78e0953", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:15:45.814937 env[1837]: 2024-12-13 14:15:45.730 [INFO][4282] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.120.129/32] ContainerID="7006def5caab23fe965eec24bcf24461b7389d5873b10e57340ca7fe51396050" Namespace="calico-system" Pod="calico-kube-controllers-54654bf745-9fq4l" WorkloadEndpoint="ip--172--31--26--163-k8s-calico--kube--controllers--54654bf745--9fq4l-eth0" Dec 13 14:15:45.814937 env[1837]: 2024-12-13 14:15:45.730 [INFO][4282] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali008f78e0953 ContainerID="7006def5caab23fe965eec24bcf24461b7389d5873b10e57340ca7fe51396050" Namespace="calico-system" Pod="calico-kube-controllers-54654bf745-9fq4l" WorkloadEndpoint="ip--172--31--26--163-k8s-calico--kube--controllers--54654bf745--9fq4l-eth0" Dec 13 14:15:45.814937 env[1837]: 2024-12-13 14:15:45.780 [INFO][4282] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7006def5caab23fe965eec24bcf24461b7389d5873b10e57340ca7fe51396050" Namespace="calico-system" Pod="calico-kube-controllers-54654bf745-9fq4l" WorkloadEndpoint="ip--172--31--26--163-k8s-calico--kube--controllers--54654bf745--9fq4l-eth0" Dec 13 
14:15:45.814937 env[1837]: 2024-12-13 14:15:45.782 [INFO][4282] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7006def5caab23fe965eec24bcf24461b7389d5873b10e57340ca7fe51396050" Namespace="calico-system" Pod="calico-kube-controllers-54654bf745-9fq4l" WorkloadEndpoint="ip--172--31--26--163-k8s-calico--kube--controllers--54654bf745--9fq4l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--163-k8s-calico--kube--controllers--54654bf745--9fq4l-eth0", GenerateName:"calico-kube-controllers-54654bf745-", Namespace:"calico-system", SelfLink:"", UID:"649c8cd1-1016-49a5-85ac-f55023619db6", ResourceVersion:"760", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 15, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54654bf745", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-163", ContainerID:"7006def5caab23fe965eec24bcf24461b7389d5873b10e57340ca7fe51396050", Pod:"calico-kube-controllers-54654bf745-9fq4l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.120.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali008f78e0953", MAC:"a6:cc:44:b9:c6:0b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 
14:15:45.814937 env[1837]: 2024-12-13 14:15:45.804 [INFO][4282] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7006def5caab23fe965eec24bcf24461b7389d5873b10e57340ca7fe51396050" Namespace="calico-system" Pod="calico-kube-controllers-54654bf745-9fq4l" WorkloadEndpoint="ip--172--31--26--163-k8s-calico--kube--controllers--54654bf745--9fq4l-eth0" Dec 13 14:15:45.868267 env[1837]: time="2024-12-13T14:15:45.868126148Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:15:45.868267 env[1837]: time="2024-12-13T14:15:45.868219258Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:15:45.868581 env[1837]: time="2024-12-13T14:15:45.868246591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:15:45.869413 env[1837]: time="2024-12-13T14:15:45.868900879Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7006def5caab23fe965eec24bcf24461b7389d5873b10e57340ca7fe51396050 pid=4354 runtime=io.containerd.runc.v2 Dec 13 14:15:45.900843 env[1837]: time="2024-12-13T14:15:45.900763043Z" level=info msg="StopPodSandbox for \"92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673\"" Dec 13 14:15:45.949000 audit[4329]: AVC avc: denied { bpf } for pid=4329 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.949000 audit[4329]: AVC avc: denied { bpf } for pid=4329 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.949000 audit[4329]: AVC avc: denied { perfmon } for pid=4329 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.949000 audit[4329]: AVC avc: denied { perfmon } for pid=4329 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.949000 audit[4329]: AVC avc: denied { perfmon } for pid=4329 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.949000 audit[4329]: AVC avc: denied { perfmon } for pid=4329 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.949000 audit[4329]: AVC avc: denied { perfmon } for pid=4329 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.949000 audit[4329]: AVC avc: denied { bpf } for pid=4329 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.949000 audit[4329]: AVC avc: denied { bpf } for pid=4329 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.949000 audit: BPF prog-id=27 op=LOAD Dec 13 14:15:45.949000 audit[4329]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffcb817018 a2=40 a3=ffffcb817048 items=0 ppid=4142 pid=4329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.949000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:15:45.949000 audit: BPF prog-id=27 op=UNLOAD Dec 13 14:15:45.949000 audit[4329]: AVC avc: denied { perfmon } for pid=4329 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.949000 audit[4329]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=0 a1=ffffcb817130 a2=50 a3=0 items=0 ppid=4142 pid=4329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.949000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:15:45.972000 audit[4329]: AVC avc: denied { bpf } for pid=4329 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.972000 audit[4329]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffcb817088 a2=28 a3=ffffcb8171b8 items=0 ppid=4142 pid=4329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.972000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:15:45.972000 audit[4329]: AVC avc: denied { bpf } for pid=4329 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Dec 13 14:15:45.972000 audit[4329]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcb8170b8 a2=28 a3=ffffcb8171e8 items=0 ppid=4142 pid=4329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.972000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:15:45.972000 audit[4329]: AVC avc: denied { bpf } for pid=4329 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.972000 audit[4329]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcb816f68 a2=28 a3=ffffcb817098 items=0 ppid=4142 pid=4329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.972000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:15:45.972000 audit[4329]: AVC avc: denied { bpf } for pid=4329 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.972000 audit[4329]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffcb8170d8 a2=28 a3=ffffcb817208 items=0 ppid=4142 pid=4329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.972000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:15:45.972000 audit[4329]: AVC avc: denied { bpf } for pid=4329 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.972000 audit[4329]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffcb8170b8 a2=28 a3=ffffcb8171e8 items=0 ppid=4142 pid=4329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.972000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:15:45.972000 audit[4329]: AVC avc: denied { bpf } for pid=4329 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.972000 audit[4329]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffcb8170a8 a2=28 a3=ffffcb8171d8 items=0 ppid=4142 pid=4329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.972000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:15:45.972000 audit[4329]: AVC avc: denied { bpf } for pid=4329 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.972000 
audit[4329]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffcb8170d8 a2=28 a3=ffffcb817208 items=0 ppid=4142 pid=4329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.972000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:15:45.972000 audit[4329]: AVC avc: denied { bpf } for pid=4329 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.972000 audit[4329]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcb8170b8 a2=28 a3=ffffcb8171e8 items=0 ppid=4142 pid=4329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.972000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:15:45.972000 audit[4329]: AVC avc: denied { bpf } for pid=4329 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.972000 audit[4329]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcb8170d8 a2=28 a3=ffffcb817208 items=0 ppid=4142 pid=4329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.972000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:15:45.972000 audit[4329]: AVC avc: denied { bpf } for pid=4329 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.972000 audit[4329]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcb8170a8 a2=28 a3=ffffcb8171d8 items=0 ppid=4142 pid=4329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.972000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:15:45.973000 audit[4329]: AVC avc: denied { bpf } for pid=4329 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.973000 audit[4329]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffcb817128 a2=28 a3=ffffcb817268 items=0 ppid=4142 pid=4329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.973000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:15:45.973000 audit[4329]: AVC avc: denied { perfmon } for pid=4329 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.973000 
audit[4329]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffcb816e60 a2=50 a3=0 items=0 ppid=4142 pid=4329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.973000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:15:45.973000 audit[4329]: AVC avc: denied { bpf } for pid=4329 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.973000 audit[4329]: AVC avc: denied { bpf } for pid=4329 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.973000 audit[4329]: AVC avc: denied { perfmon } for pid=4329 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.973000 audit[4329]: AVC avc: denied { perfmon } for pid=4329 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.973000 audit[4329]: AVC avc: denied { perfmon } for pid=4329 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.973000 audit[4329]: AVC avc: denied { perfmon } for pid=4329 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.973000 audit[4329]: AVC avc: denied { perfmon } for pid=4329 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.973000 audit[4329]: AVC avc: denied { bpf } for pid=4329 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.973000 audit[4329]: AVC avc: denied { bpf } for pid=4329 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.973000 audit: BPF prog-id=28 op=LOAD Dec 13 14:15:45.973000 audit[4329]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffcb816e68 a2=94 a3=5 items=0 ppid=4142 pid=4329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.973000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:15:45.973000 audit: BPF prog-id=28 op=UNLOAD Dec 13 14:15:45.973000 audit[4329]: AVC avc: denied { perfmon } for pid=4329 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.973000 audit[4329]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffcb816f70 a2=50 a3=0 items=0 ppid=4142 pid=4329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.973000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:15:45.973000 audit[4329]: AVC avc: denied { bpf } 
for pid=4329 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.973000 audit[4329]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=16 a1=ffffcb8170b8 a2=4 a3=3 items=0 ppid=4142 pid=4329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.973000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:15:45.973000 audit[4329]: AVC avc: denied { bpf } for pid=4329 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.973000 audit[4329]: AVC avc: denied { bpf } for pid=4329 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.973000 audit[4329]: AVC avc: denied { perfmon } for pid=4329 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.973000 audit[4329]: AVC avc: denied { bpf } for pid=4329 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.973000 audit[4329]: AVC avc: denied { perfmon } for pid=4329 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.973000 audit[4329]: AVC avc: denied { perfmon } for pid=4329 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 
13 14:15:45.973000 audit[4329]: AVC avc: denied { perfmon } for pid=4329 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.973000 audit[4329]: AVC avc: denied { perfmon } for pid=4329 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.973000 audit[4329]: AVC avc: denied { perfmon } for pid=4329 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.973000 audit[4329]: AVC avc: denied { bpf } for pid=4329 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.973000 audit[4329]: AVC avc: denied { confidentiality } for pid=4329 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 14:15:45.973000 audit[4329]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffcb817098 a2=94 a3=6 items=0 ppid=4142 pid=4329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.973000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:15:45.973000 audit[4329]: AVC avc: denied { bpf } for pid=4329 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.973000 audit[4329]: AVC avc: denied { bpf } for pid=4329 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.973000 audit[4329]: AVC avc: denied { perfmon } for pid=4329 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.973000 audit[4329]: AVC avc: denied { bpf } for pid=4329 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.973000 audit[4329]: AVC avc: denied { perfmon } for pid=4329 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.973000 audit[4329]: AVC avc: denied { perfmon } for pid=4329 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.973000 audit[4329]: AVC avc: denied { perfmon } for pid=4329 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.973000 audit[4329]: AVC avc: denied { perfmon } for pid=4329 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.973000 audit[4329]: AVC avc: denied { perfmon } for pid=4329 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.973000 audit[4329]: AVC avc: denied { bpf } for pid=4329 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.973000 audit[4329]: AVC avc: denied { confidentiality } for pid=4329 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 14:15:45.973000 audit[4329]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffcb816868 a2=94 a3=83 items=0 ppid=4142 pid=4329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.973000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:15:45.974000 audit[4329]: AVC avc: denied { bpf } for pid=4329 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.974000 audit[4329]: AVC avc: denied { bpf } for pid=4329 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.974000 audit[4329]: AVC avc: denied { perfmon } for pid=4329 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.974000 audit[4329]: AVC avc: denied { bpf } for pid=4329 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.974000 audit[4329]: AVC avc: denied { perfmon } for pid=4329 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.974000 audit[4329]: AVC avc: denied { perfmon } for pid=4329 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.974000 audit[4329]: AVC avc: 
denied { perfmon } for pid=4329 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.974000 audit[4329]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffcb816868 a2=94 a3=83 items=0 ppid=4142 pid=4329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.974000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:15:45.976000 audit[4329]: AVC avc: denied { bpf } for pid=4329 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.976000 audit[4329]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffcb8182a8 a2=10 a3=ffffcb818398 items=0 ppid=4142 pid=4329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.976000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:15:45.977000 audit[4329]: AVC avc: denied { bpf } for pid=4329 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.977000 audit[4329]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffcb818168 a2=10 a3=ffffcb818258 items=0 ppid=4142 pid=4329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.977000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:15:45.977000 audit[4329]: AVC avc: denied { bpf } for pid=4329 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.977000 audit[4329]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffcb8180d8 a2=10 a3=ffffcb818258 items=0 ppid=4142 pid=4329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.977000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:15:45.977000 audit[4329]: AVC avc: denied { bpf } for pid=4329 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:15:45.977000 audit[4329]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffcb8180d8 a2=10 a3=ffffcb818258 items=0 ppid=4142 pid=4329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:45.977000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:15:45.997000 audit: BPF prog-id=23 op=UNLOAD Dec 13 14:15:46.119696 kubelet[2999]: I1213 14:15:46.118952 2999 
prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 14:15:46.126686 env[1837]: time="2024-12-13T14:15:46.124552151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54654bf745-9fq4l,Uid:649c8cd1-1016-49a5-85ac-f55023619db6,Namespace:calico-system,Attempt:1,} returns sandbox id \"7006def5caab23fe965eec24bcf24461b7389d5873b10e57340ca7fe51396050\"" Dec 13 14:15:46.135055 env[1837]: time="2024-12-13T14:15:46.134976894Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Dec 13 14:15:46.199350 systemd[1]: run-containerd-runc-k8s.io-7006def5caab23fe965eec24bcf24461b7389d5873b10e57340ca7fe51396050-runc.KPGo0b.mount: Deactivated successfully. Dec 13 14:15:46.277000 audit[4448]: NETFILTER_CFG table=mangle:97 family=2 entries=16 op=nft_register_chain pid=4448 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:15:46.277000 audit[4448]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffd5b03130 a2=0 a3=ffffa38bdfa8 items=0 ppid=4142 pid=4448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:46.277000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:15:46.309859 env[1837]: 2024-12-13 14:15:46.170 [INFO][4397] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673" Dec 13 14:15:46.309859 env[1837]: 2024-12-13 14:15:46.170 [INFO][4397] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673" iface="eth0" netns="/var/run/netns/cni-63607b4a-1d42-cf65-23b4-719566895d82" Dec 13 14:15:46.309859 env[1837]: 2024-12-13 14:15:46.170 [INFO][4397] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673" iface="eth0" netns="/var/run/netns/cni-63607b4a-1d42-cf65-23b4-719566895d82" Dec 13 14:15:46.309859 env[1837]: 2024-12-13 14:15:46.171 [INFO][4397] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673" iface="eth0" netns="/var/run/netns/cni-63607b4a-1d42-cf65-23b4-719566895d82" Dec 13 14:15:46.309859 env[1837]: 2024-12-13 14:15:46.171 [INFO][4397] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673" Dec 13 14:15:46.309859 env[1837]: 2024-12-13 14:15:46.171 [INFO][4397] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673" Dec 13 14:15:46.309859 env[1837]: 2024-12-13 14:15:46.282 [INFO][4427] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673" HandleID="k8s-pod-network.92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673" Workload="ip--172--31--26--163-k8s-coredns--76f75df574--rbxmj-eth0" Dec 13 14:15:46.309859 env[1837]: 2024-12-13 14:15:46.283 [INFO][4427] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:15:46.309859 env[1837]: 2024-12-13 14:15:46.283 [INFO][4427] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:15:46.309859 env[1837]: 2024-12-13 14:15:46.297 [WARNING][4427] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673" HandleID="k8s-pod-network.92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673" Workload="ip--172--31--26--163-k8s-coredns--76f75df574--rbxmj-eth0" Dec 13 14:15:46.309859 env[1837]: 2024-12-13 14:15:46.297 [INFO][4427] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673" HandleID="k8s-pod-network.92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673" Workload="ip--172--31--26--163-k8s-coredns--76f75df574--rbxmj-eth0" Dec 13 14:15:46.309859 env[1837]: 2024-12-13 14:15:46.300 [INFO][4427] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:15:46.309859 env[1837]: 2024-12-13 14:15:46.303 [INFO][4397] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673" Dec 13 14:15:46.313834 env[1837]: time="2024-12-13T14:15:46.313768680Z" level=info msg="TearDown network for sandbox \"92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673\" successfully" Dec 13 14:15:46.314043 env[1837]: time="2024-12-13T14:15:46.314001960Z" level=info msg="StopPodSandbox for \"92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673\" returns successfully" Dec 13 14:15:46.316385 systemd[1]: run-netns-cni\x2d63607b4a\x2d1d42\x2dcf65\x2d23b4\x2d719566895d82.mount: Deactivated successfully. 
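The audit PROCTITLE records above carry the process command line hex-encoded, with NUL bytes separating the arguments. A minimal sketch of decoding one (the helper name is illustrative; the hex value is the bpftool PROCTITLE from the audit stream above):

```python
# Decode an audit PROCTITLE value: hex-encoded argv, NUL-separated.
def decode_proctitle(hex_value: str) -> list[str]:
    raw = bytes.fromhex(hex_value)
    return [arg.decode() for arg in raw.split(b"\x00") if arg]

# bpftool PROCTITLE value taken verbatim from the audit records above.
title = ("627066746F6F6C00" "2D2D6A736F6E00" "2D2D70726574747900"
         "70726F6700" "73686F7700" "70696E6E656400"
         "2F7379732F66732F6270662F63616C69636F2F7864702F"
         "70726566696C7465725F76315F63616C69636F5F746D705F41")
print(decode_proctitle(title))
# → ['bpftool', '--json', '--pretty', 'prog', 'show', 'pinned',
#    '/sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A']
```

Decoded, the denied process is `bpftool --json --pretty prog show pinned /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A`, which matches the Calico XDP prefilter activity seen elsewhere in this log. The same decoding applies to the iptables-nft-restore PROCTITLE values in the NETFILTER_CFG records.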
Dec 13 14:15:46.317673 env[1837]: time="2024-12-13T14:15:46.316820974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-rbxmj,Uid:d999231e-24a7-47cf-8eea-96857833ff01,Namespace:kube-system,Attempt:1,}" Dec 13 14:15:46.347000 audit[4454]: NETFILTER_CFG table=filter:98 family=2 entries=39 op=nft_register_chain pid=4454 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:15:46.347000 audit[4454]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=18968 a0=3 a1=ffffd2805630 a2=0 a3=ffff8adbffa8 items=0 ppid=4142 pid=4454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:46.347000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:15:46.350000 audit[4450]: NETFILTER_CFG table=raw:99 family=2 entries=21 op=nft_register_chain pid=4450 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:15:46.350000 audit[4450]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffeaab2930 a2=0 a3=ffff84458fa8 items=0 ppid=4142 pid=4450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:46.350000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:15:46.369000 audit[4451]: NETFILTER_CFG table=nat:100 family=2 entries=15 op=nft_register_chain pid=4451 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:15:46.369000 audit[4451]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffff8bf800 a2=0 
a3=ffffb9ccdfa8 items=0 ppid=4142 pid=4451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:46.369000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:15:46.513000 audit[4484]: NETFILTER_CFG table=filter:101 family=2 entries=34 op=nft_register_chain pid=4484 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:15:46.513000 audit[4484]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19148 a0=3 a1=ffffd8193930 a2=0 a3=ffffafdbcfa8 items=0 ppid=4142 pid=4484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:46.513000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:15:46.648522 systemd-networkd[1507]: vxlan.calico: Gained IPv6LL Dec 13 14:15:46.703041 systemd-networkd[1507]: cali26f07bd83c5: Link UP Dec 13 14:15:46.709038 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:15:46.709177 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali26f07bd83c5: link becomes ready Dec 13 14:15:46.715573 systemd-networkd[1507]: cali26f07bd83c5: Gained carrier Dec 13 14:15:46.751403 env[1837]: 2024-12-13 14:15:46.462 [INFO][4461] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--163-k8s-coredns--76f75df574--rbxmj-eth0 coredns-76f75df574- kube-system d999231e-24a7-47cf-8eea-96857833ff01 768 0 2024-12-13 14:15:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 
projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-26-163 coredns-76f75df574-rbxmj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali26f07bd83c5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c861dc50f697aa85f674702de189a25175926905b8e321c47b57b61bdae6d427" Namespace="kube-system" Pod="coredns-76f75df574-rbxmj" WorkloadEndpoint="ip--172--31--26--163-k8s-coredns--76f75df574--rbxmj-" Dec 13 14:15:46.751403 env[1837]: 2024-12-13 14:15:46.463 [INFO][4461] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c861dc50f697aa85f674702de189a25175926905b8e321c47b57b61bdae6d427" Namespace="kube-system" Pod="coredns-76f75df574-rbxmj" WorkloadEndpoint="ip--172--31--26--163-k8s-coredns--76f75df574--rbxmj-eth0" Dec 13 14:15:46.751403 env[1837]: 2024-12-13 14:15:46.585 [INFO][4480] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c861dc50f697aa85f674702de189a25175926905b8e321c47b57b61bdae6d427" HandleID="k8s-pod-network.c861dc50f697aa85f674702de189a25175926905b8e321c47b57b61bdae6d427" Workload="ip--172--31--26--163-k8s-coredns--76f75df574--rbxmj-eth0" Dec 13 14:15:46.751403 env[1837]: 2024-12-13 14:15:46.624 [INFO][4480] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c861dc50f697aa85f674702de189a25175926905b8e321c47b57b61bdae6d427" HandleID="k8s-pod-network.c861dc50f697aa85f674702de189a25175926905b8e321c47b57b61bdae6d427" Workload="ip--172--31--26--163-k8s-coredns--76f75df574--rbxmj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028dbb0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-26-163", "pod":"coredns-76f75df574-rbxmj", "timestamp":"2024-12-13 14:15:46.585758156 +0000 UTC"}, Hostname:"ip-172-31-26-163", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 14:15:46.751403 env[1837]: 2024-12-13 14:15:46.624 [INFO][4480] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:15:46.751403 env[1837]: 2024-12-13 14:15:46.624 [INFO][4480] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:15:46.751403 env[1837]: 2024-12-13 14:15:46.624 [INFO][4480] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-163' Dec 13 14:15:46.751403 env[1837]: 2024-12-13 14:15:46.628 [INFO][4480] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c861dc50f697aa85f674702de189a25175926905b8e321c47b57b61bdae6d427" host="ip-172-31-26-163" Dec 13 14:15:46.751403 env[1837]: 2024-12-13 14:15:46.636 [INFO][4480] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-26-163" Dec 13 14:15:46.751403 env[1837]: 2024-12-13 14:15:46.645 [INFO][4480] ipam/ipam.go 489: Trying affinity for 192.168.120.128/26 host="ip-172-31-26-163" Dec 13 14:15:46.751403 env[1837]: 2024-12-13 14:15:46.650 [INFO][4480] ipam/ipam.go 155: Attempting to load block cidr=192.168.120.128/26 host="ip-172-31-26-163" Dec 13 14:15:46.751403 env[1837]: 2024-12-13 14:15:46.654 [INFO][4480] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.120.128/26 host="ip-172-31-26-163" Dec 13 14:15:46.751403 env[1837]: 2024-12-13 14:15:46.654 [INFO][4480] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.120.128/26 handle="k8s-pod-network.c861dc50f697aa85f674702de189a25175926905b8e321c47b57b61bdae6d427" host="ip-172-31-26-163" Dec 13 14:15:46.751403 env[1837]: 2024-12-13 14:15:46.656 [INFO][4480] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c861dc50f697aa85f674702de189a25175926905b8e321c47b57b61bdae6d427 Dec 13 14:15:46.751403 env[1837]: 2024-12-13 14:15:46.665 [INFO][4480] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.120.128/26 
handle="k8s-pod-network.c861dc50f697aa85f674702de189a25175926905b8e321c47b57b61bdae6d427" host="ip-172-31-26-163" Dec 13 14:15:46.751403 env[1837]: 2024-12-13 14:15:46.681 [INFO][4480] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.120.130/26] block=192.168.120.128/26 handle="k8s-pod-network.c861dc50f697aa85f674702de189a25175926905b8e321c47b57b61bdae6d427" host="ip-172-31-26-163" Dec 13 14:15:46.751403 env[1837]: 2024-12-13 14:15:46.681 [INFO][4480] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.120.130/26] handle="k8s-pod-network.c861dc50f697aa85f674702de189a25175926905b8e321c47b57b61bdae6d427" host="ip-172-31-26-163" Dec 13 14:15:46.751403 env[1837]: 2024-12-13 14:15:46.681 [INFO][4480] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:15:46.751403 env[1837]: 2024-12-13 14:15:46.681 [INFO][4480] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.120.130/26] IPv6=[] ContainerID="c861dc50f697aa85f674702de189a25175926905b8e321c47b57b61bdae6d427" HandleID="k8s-pod-network.c861dc50f697aa85f674702de189a25175926905b8e321c47b57b61bdae6d427" Workload="ip--172--31--26--163-k8s-coredns--76f75df574--rbxmj-eth0" Dec 13 14:15:46.753582 env[1837]: 2024-12-13 14:15:46.691 [INFO][4461] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c861dc50f697aa85f674702de189a25175926905b8e321c47b57b61bdae6d427" Namespace="kube-system" Pod="coredns-76f75df574-rbxmj" WorkloadEndpoint="ip--172--31--26--163-k8s-coredns--76f75df574--rbxmj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--163-k8s-coredns--76f75df574--rbxmj-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"d999231e-24a7-47cf-8eea-96857833ff01", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 15, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-163", ContainerID:"", Pod:"coredns-76f75df574-rbxmj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali26f07bd83c5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:15:46.753582 env[1837]: 2024-12-13 14:15:46.692 [INFO][4461] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.120.130/32] ContainerID="c861dc50f697aa85f674702de189a25175926905b8e321c47b57b61bdae6d427" Namespace="kube-system" Pod="coredns-76f75df574-rbxmj" WorkloadEndpoint="ip--172--31--26--163-k8s-coredns--76f75df574--rbxmj-eth0" Dec 13 14:15:46.753582 env[1837]: 2024-12-13 14:15:46.692 [INFO][4461] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali26f07bd83c5 ContainerID="c861dc50f697aa85f674702de189a25175926905b8e321c47b57b61bdae6d427" Namespace="kube-system" Pod="coredns-76f75df574-rbxmj" WorkloadEndpoint="ip--172--31--26--163-k8s-coredns--76f75df574--rbxmj-eth0" Dec 13 14:15:46.753582 env[1837]: 
2024-12-13 14:15:46.716 [INFO][4461] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c861dc50f697aa85f674702de189a25175926905b8e321c47b57b61bdae6d427" Namespace="kube-system" Pod="coredns-76f75df574-rbxmj" WorkloadEndpoint="ip--172--31--26--163-k8s-coredns--76f75df574--rbxmj-eth0" Dec 13 14:15:46.753582 env[1837]: 2024-12-13 14:15:46.717 [INFO][4461] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c861dc50f697aa85f674702de189a25175926905b8e321c47b57b61bdae6d427" Namespace="kube-system" Pod="coredns-76f75df574-rbxmj" WorkloadEndpoint="ip--172--31--26--163-k8s-coredns--76f75df574--rbxmj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--163-k8s-coredns--76f75df574--rbxmj-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"d999231e-24a7-47cf-8eea-96857833ff01", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 15, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-163", ContainerID:"c861dc50f697aa85f674702de189a25175926905b8e321c47b57b61bdae6d427", Pod:"coredns-76f75df574-rbxmj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali26f07bd83c5", 
MAC:"96:43:ff:b8:03:24", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:15:46.753582 env[1837]: 2024-12-13 14:15:46.741 [INFO][4461] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c861dc50f697aa85f674702de189a25175926905b8e321c47b57b61bdae6d427" Namespace="kube-system" Pod="coredns-76f75df574-rbxmj" WorkloadEndpoint="ip--172--31--26--163-k8s-coredns--76f75df574--rbxmj-eth0" Dec 13 14:15:46.792000 audit[4522]: NETFILTER_CFG table=filter:102 family=2 entries=38 op=nft_register_chain pid=4522 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:15:46.804552 env[1837]: time="2024-12-13T14:15:46.803211465Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:15:46.804884 env[1837]: time="2024-12-13T14:15:46.804529997Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:15:46.804884 env[1837]: time="2024-12-13T14:15:46.804573601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:15:46.805382 env[1837]: time="2024-12-13T14:15:46.805239280Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c861dc50f697aa85f674702de189a25175926905b8e321c47b57b61bdae6d427 pid=4530 runtime=io.containerd.runc.v2 Dec 13 14:15:46.792000 audit[4522]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=20336 a0=3 a1=ffffc6954ba0 a2=0 a3=ffff88069fa8 items=0 ppid=4142 pid=4522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:46.792000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:15:46.896424 env[1837]: time="2024-12-13T14:15:46.895178729Z" level=info msg="StopPodSandbox for \"b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb\"" Dec 13 14:15:46.903259 env[1837]: time="2024-12-13T14:15:46.899876870Z" level=info msg="StopPodSandbox for \"9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9\"" Dec 13 14:15:46.956968 env[1837]: time="2024-12-13T14:15:46.956884950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-rbxmj,Uid:d999231e-24a7-47cf-8eea-96857833ff01,Namespace:kube-system,Attempt:1,} returns sandbox id \"c861dc50f697aa85f674702de189a25175926905b8e321c47b57b61bdae6d427\"" Dec 13 14:15:46.964090 env[1837]: time="2024-12-13T14:15:46.964017377Z" level=info msg="CreateContainer within sandbox \"c861dc50f697aa85f674702de189a25175926905b8e321c47b57b61bdae6d427\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 14:15:46.991707 env[1837]: time="2024-12-13T14:15:46.991605083Z" level=info msg="CreateContainer within sandbox 
\"c861dc50f697aa85f674702de189a25175926905b8e321c47b57b61bdae6d427\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"091b0a9693bad5e382aedc4bc6e36de9391a26c3cd20c81b89ab70d95128eb78\"" Dec 13 14:15:46.994903 env[1837]: time="2024-12-13T14:15:46.994701580Z" level=info msg="StartContainer for \"091b0a9693bad5e382aedc4bc6e36de9391a26c3cd20c81b89ab70d95128eb78\"" Dec 13 14:15:47.274324 env[1837]: time="2024-12-13T14:15:47.274143758Z" level=info msg="StartContainer for \"091b0a9693bad5e382aedc4bc6e36de9391a26c3cd20c81b89ab70d95128eb78\" returns successfully" Dec 13 14:15:47.287874 systemd-networkd[1507]: cali008f78e0953: Gained IPv6LL Dec 13 14:15:47.347672 env[1837]: 2024-12-13 14:15:47.113 [INFO][4597] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb" Dec 13 14:15:47.347672 env[1837]: 2024-12-13 14:15:47.113 [INFO][4597] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb" iface="eth0" netns="/var/run/netns/cni-eef444e5-7934-4d2d-2d71-1473ef9a2131" Dec 13 14:15:47.347672 env[1837]: 2024-12-13 14:15:47.114 [INFO][4597] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb" iface="eth0" netns="/var/run/netns/cni-eef444e5-7934-4d2d-2d71-1473ef9a2131" Dec 13 14:15:47.347672 env[1837]: 2024-12-13 14:15:47.118 [INFO][4597] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb" iface="eth0" netns="/var/run/netns/cni-eef444e5-7934-4d2d-2d71-1473ef9a2131" Dec 13 14:15:47.347672 env[1837]: 2024-12-13 14:15:47.118 [INFO][4597] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb" Dec 13 14:15:47.347672 env[1837]: 2024-12-13 14:15:47.118 [INFO][4597] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb" Dec 13 14:15:47.347672 env[1837]: 2024-12-13 14:15:47.297 [INFO][4631] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb" HandleID="k8s-pod-network.b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb" Workload="ip--172--31--26--163-k8s-calico--apiserver--bffc5dd46--2qrs2-eth0" Dec 13 14:15:47.347672 env[1837]: 2024-12-13 14:15:47.297 [INFO][4631] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:15:47.347672 env[1837]: 2024-12-13 14:15:47.298 [INFO][4631] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:15:47.347672 env[1837]: 2024-12-13 14:15:47.321 [WARNING][4631] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb" HandleID="k8s-pod-network.b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb" Workload="ip--172--31--26--163-k8s-calico--apiserver--bffc5dd46--2qrs2-eth0" Dec 13 14:15:47.347672 env[1837]: 2024-12-13 14:15:47.322 [INFO][4631] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb" HandleID="k8s-pod-network.b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb" Workload="ip--172--31--26--163-k8s-calico--apiserver--bffc5dd46--2qrs2-eth0" Dec 13 14:15:47.347672 env[1837]: 2024-12-13 14:15:47.337 [INFO][4631] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:15:47.347672 env[1837]: 2024-12-13 14:15:47.341 [INFO][4597] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb" Dec 13 14:15:47.351899 env[1837]: time="2024-12-13T14:15:47.351829277Z" level=info msg="TearDown network for sandbox \"b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb\" successfully" Dec 13 14:15:47.352136 env[1837]: time="2024-12-13T14:15:47.352100931Z" level=info msg="StopPodSandbox for \"b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb\" returns successfully" Dec 13 14:15:47.356303 env[1837]: time="2024-12-13T14:15:47.356228087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bffc5dd46-2qrs2,Uid:ef634f66-b7a3-4b1f-99b3-8db2e225f26a,Namespace:calico-apiserver,Attempt:1,}" Dec 13 14:15:47.359821 systemd[1]: run-netns-cni\x2deef444e5\x2d7934\x2d4d2d\x2d2d71\x2d1473ef9a2131.mount: Deactivated successfully. 
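The repeated AVC denials in this section report `capability=38` and `capability=39` in the `capability2` class. A minimal sketch mapping those numbers to names, assuming the Linux 5.8+ capability numbering (the helper name and the partial table are illustrative; the full list lives in `linux/include/uapi/linux/capability.h`):

```python
import re

# Partial capability-number table covering the values seen in this log.
CAPABILITIES = {
    37: "CAP_AUDIT_READ",
    38: "CAP_PERFMON",             # perf_event / BPF tracing access
    39: "CAP_BPF",                 # BPF ops, split out of CAP_SYS_ADMIN
    40: "CAP_CHECKPOINT_RESTORE",
}

def denied_capability(avc_line: str) -> str:
    """Extract capability=N from an AVC record and name it."""
    m = re.search(r"capability=(\d+)", avc_line)
    if not m:
        return "unknown"
    n = int(m.group(1))
    return CAPABILITIES.get(n, f"capability {n}")

line = ('avc: denied { perfmon } for pid=4329 comm="bpftool" '
        'capability=38 tclass=capability2 permissive=0')
print(denied_capability(line))  # → CAP_PERFMON
```

With `permissive=0` these denials are enforced, which is consistent with the accompanying SYSCALL records for syscall 280 (bpf on aarch64, arch=c00000b7) failing with `exit=-22` until the later attempts that report `success=yes`.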
Dec 13 14:15:47.390581 env[1837]: 2024-12-13 14:15:47.088 [INFO][4586] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9" Dec 13 14:15:47.390581 env[1837]: 2024-12-13 14:15:47.088 [INFO][4586] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9" iface="eth0" netns="/var/run/netns/cni-21c4a06b-42d6-0670-d8ad-79a78052a665" Dec 13 14:15:47.390581 env[1837]: 2024-12-13 14:15:47.091 [INFO][4586] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9" iface="eth0" netns="/var/run/netns/cni-21c4a06b-42d6-0670-d8ad-79a78052a665" Dec 13 14:15:47.390581 env[1837]: 2024-12-13 14:15:47.100 [INFO][4586] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9" iface="eth0" netns="/var/run/netns/cni-21c4a06b-42d6-0670-d8ad-79a78052a665" Dec 13 14:15:47.390581 env[1837]: 2024-12-13 14:15:47.101 [INFO][4586] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9" Dec 13 14:15:47.390581 env[1837]: 2024-12-13 14:15:47.101 [INFO][4586] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9" Dec 13 14:15:47.390581 env[1837]: 2024-12-13 14:15:47.314 [INFO][4623] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9" HandleID="k8s-pod-network.9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9" Workload="ip--172--31--26--163-k8s-coredns--76f75df574--nxsxq-eth0" Dec 13 14:15:47.390581 env[1837]: 2024-12-13 14:15:47.315 [INFO][4623] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Dec 13 14:15:47.390581 env[1837]: 2024-12-13 14:15:47.324 [INFO][4623] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:15:47.390581 env[1837]: 2024-12-13 14:15:47.353 [WARNING][4623] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9" HandleID="k8s-pod-network.9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9" Workload="ip--172--31--26--163-k8s-coredns--76f75df574--nxsxq-eth0" Dec 13 14:15:47.390581 env[1837]: 2024-12-13 14:15:47.353 [INFO][4623] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9" HandleID="k8s-pod-network.9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9" Workload="ip--172--31--26--163-k8s-coredns--76f75df574--nxsxq-eth0" Dec 13 14:15:47.390581 env[1837]: 2024-12-13 14:15:47.371 [INFO][4623] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:15:47.390581 env[1837]: 2024-12-13 14:15:47.382 [INFO][4586] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9" Dec 13 14:15:47.391891 env[1837]: time="2024-12-13T14:15:47.391835084Z" level=info msg="TearDown network for sandbox \"9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9\" successfully" Dec 13 14:15:47.392102 env[1837]: time="2024-12-13T14:15:47.392064634Z" level=info msg="StopPodSandbox for \"9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9\" returns successfully" Dec 13 14:15:47.402477 systemd[1]: run-netns-cni\x2d21c4a06b\x2d42d6\x2d0670\x2dd8ad\x2d79a78052a665.mount: Deactivated successfully. 
Dec 13 14:15:47.415451 env[1837]: time="2024-12-13T14:15:47.415371154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-nxsxq,Uid:d93f59e9-cea4-4e42-99d4-3d89f412196e,Namespace:kube-system,Attempt:1,}" Dec 13 14:15:47.903806 systemd-networkd[1507]: cali34aa87130ca: Link UP Dec 13 14:15:47.911654 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:15:47.911814 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali34aa87130ca: link becomes ready Dec 13 14:15:47.912069 systemd-networkd[1507]: cali34aa87130ca: Gained carrier Dec 13 14:15:48.045157 env[1837]: 2024-12-13 14:15:47.504 [INFO][4655] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--163-k8s-calico--apiserver--bffc5dd46--2qrs2-eth0 calico-apiserver-bffc5dd46- calico-apiserver ef634f66-b7a3-4b1f-99b3-8db2e225f26a 780 0 2024-12-13 14:15:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:bffc5dd46 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-26-163 calico-apiserver-bffc5dd46-2qrs2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali34aa87130ca [] []}} ContainerID="d008068efc2aabdb93344561c6e26e4d710313e8c6431f38987590fa15d9ea6e" Namespace="calico-apiserver" Pod="calico-apiserver-bffc5dd46-2qrs2" WorkloadEndpoint="ip--172--31--26--163-k8s-calico--apiserver--bffc5dd46--2qrs2-" Dec 13 14:15:48.045157 env[1837]: 2024-12-13 14:15:47.504 [INFO][4655] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d008068efc2aabdb93344561c6e26e4d710313e8c6431f38987590fa15d9ea6e" Namespace="calico-apiserver" Pod="calico-apiserver-bffc5dd46-2qrs2" WorkloadEndpoint="ip--172--31--26--163-k8s-calico--apiserver--bffc5dd46--2qrs2-eth0" Dec 13 14:15:48.045157 env[1837]: 2024-12-13 
14:15:47.650 [INFO][4676] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d008068efc2aabdb93344561c6e26e4d710313e8c6431f38987590fa15d9ea6e" HandleID="k8s-pod-network.d008068efc2aabdb93344561c6e26e4d710313e8c6431f38987590fa15d9ea6e" Workload="ip--172--31--26--163-k8s-calico--apiserver--bffc5dd46--2qrs2-eth0" Dec 13 14:15:48.045157 env[1837]: 2024-12-13 14:15:47.683 [INFO][4676] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d008068efc2aabdb93344561c6e26e4d710313e8c6431f38987590fa15d9ea6e" HandleID="k8s-pod-network.d008068efc2aabdb93344561c6e26e4d710313e8c6431f38987590fa15d9ea6e" Workload="ip--172--31--26--163-k8s-calico--apiserver--bffc5dd46--2qrs2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000360910), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-26-163", "pod":"calico-apiserver-bffc5dd46-2qrs2", "timestamp":"2024-12-13 14:15:47.650285384 +0000 UTC"}, Hostname:"ip-172-31-26-163", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 14:15:48.045157 env[1837]: 2024-12-13 14:15:47.684 [INFO][4676] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:15:48.045157 env[1837]: 2024-12-13 14:15:47.686 [INFO][4676] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 14:15:48.045157 env[1837]: 2024-12-13 14:15:47.690 [INFO][4676] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-163' Dec 13 14:15:48.045157 env[1837]: 2024-12-13 14:15:47.693 [INFO][4676] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d008068efc2aabdb93344561c6e26e4d710313e8c6431f38987590fa15d9ea6e" host="ip-172-31-26-163" Dec 13 14:15:48.045157 env[1837]: 2024-12-13 14:15:47.708 [INFO][4676] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-26-163" Dec 13 14:15:48.045157 env[1837]: 2024-12-13 14:15:47.719 [INFO][4676] ipam/ipam.go 489: Trying affinity for 192.168.120.128/26 host="ip-172-31-26-163" Dec 13 14:15:48.045157 env[1837]: 2024-12-13 14:15:47.725 [INFO][4676] ipam/ipam.go 155: Attempting to load block cidr=192.168.120.128/26 host="ip-172-31-26-163" Dec 13 14:15:48.045157 env[1837]: 2024-12-13 14:15:47.730 [INFO][4676] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.120.128/26 host="ip-172-31-26-163" Dec 13 14:15:48.045157 env[1837]: 2024-12-13 14:15:47.730 [INFO][4676] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.120.128/26 handle="k8s-pod-network.d008068efc2aabdb93344561c6e26e4d710313e8c6431f38987590fa15d9ea6e" host="ip-172-31-26-163" Dec 13 14:15:48.045157 env[1837]: 2024-12-13 14:15:47.753 [INFO][4676] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d008068efc2aabdb93344561c6e26e4d710313e8c6431f38987590fa15d9ea6e Dec 13 14:15:48.045157 env[1837]: 2024-12-13 14:15:47.819 [INFO][4676] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.120.128/26 handle="k8s-pod-network.d008068efc2aabdb93344561c6e26e4d710313e8c6431f38987590fa15d9ea6e" host="ip-172-31-26-163" Dec 13 14:15:48.045157 env[1837]: 2024-12-13 14:15:47.881 [INFO][4676] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.120.131/26] block=192.168.120.128/26 
handle="k8s-pod-network.d008068efc2aabdb93344561c6e26e4d710313e8c6431f38987590fa15d9ea6e" host="ip-172-31-26-163" Dec 13 14:15:48.045157 env[1837]: 2024-12-13 14:15:47.881 [INFO][4676] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.120.131/26] handle="k8s-pod-network.d008068efc2aabdb93344561c6e26e4d710313e8c6431f38987590fa15d9ea6e" host="ip-172-31-26-163" Dec 13 14:15:48.045157 env[1837]: 2024-12-13 14:15:47.881 [INFO][4676] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:15:48.045157 env[1837]: 2024-12-13 14:15:47.881 [INFO][4676] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.120.131/26] IPv6=[] ContainerID="d008068efc2aabdb93344561c6e26e4d710313e8c6431f38987590fa15d9ea6e" HandleID="k8s-pod-network.d008068efc2aabdb93344561c6e26e4d710313e8c6431f38987590fa15d9ea6e" Workload="ip--172--31--26--163-k8s-calico--apiserver--bffc5dd46--2qrs2-eth0" Dec 13 14:15:48.046561 env[1837]: 2024-12-13 14:15:47.891 [INFO][4655] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d008068efc2aabdb93344561c6e26e4d710313e8c6431f38987590fa15d9ea6e" Namespace="calico-apiserver" Pod="calico-apiserver-bffc5dd46-2qrs2" WorkloadEndpoint="ip--172--31--26--163-k8s-calico--apiserver--bffc5dd46--2qrs2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--163-k8s-calico--apiserver--bffc5dd46--2qrs2-eth0", GenerateName:"calico-apiserver-bffc5dd46-", Namespace:"calico-apiserver", SelfLink:"", UID:"ef634f66-b7a3-4b1f-99b3-8db2e225f26a", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 15, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bffc5dd46", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-163", ContainerID:"", Pod:"calico-apiserver-bffc5dd46-2qrs2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.120.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali34aa87130ca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:15:48.046561 env[1837]: 2024-12-13 14:15:47.891 [INFO][4655] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.120.131/32] ContainerID="d008068efc2aabdb93344561c6e26e4d710313e8c6431f38987590fa15d9ea6e" Namespace="calico-apiserver" Pod="calico-apiserver-bffc5dd46-2qrs2" WorkloadEndpoint="ip--172--31--26--163-k8s-calico--apiserver--bffc5dd46--2qrs2-eth0" Dec 13 14:15:48.046561 env[1837]: 2024-12-13 14:15:47.891 [INFO][4655] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali34aa87130ca ContainerID="d008068efc2aabdb93344561c6e26e4d710313e8c6431f38987590fa15d9ea6e" Namespace="calico-apiserver" Pod="calico-apiserver-bffc5dd46-2qrs2" WorkloadEndpoint="ip--172--31--26--163-k8s-calico--apiserver--bffc5dd46--2qrs2-eth0" Dec 13 14:15:48.046561 env[1837]: 2024-12-13 14:15:47.940 [INFO][4655] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d008068efc2aabdb93344561c6e26e4d710313e8c6431f38987590fa15d9ea6e" Namespace="calico-apiserver" Pod="calico-apiserver-bffc5dd46-2qrs2" WorkloadEndpoint="ip--172--31--26--163-k8s-calico--apiserver--bffc5dd46--2qrs2-eth0" Dec 13 14:15:48.046561 env[1837]: 2024-12-13 14:15:47.941 [INFO][4655] cni-plugin/k8s.go 414: Added Mac, interface name, and 
active container ID to endpoint ContainerID="d008068efc2aabdb93344561c6e26e4d710313e8c6431f38987590fa15d9ea6e" Namespace="calico-apiserver" Pod="calico-apiserver-bffc5dd46-2qrs2" WorkloadEndpoint="ip--172--31--26--163-k8s-calico--apiserver--bffc5dd46--2qrs2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--163-k8s-calico--apiserver--bffc5dd46--2qrs2-eth0", GenerateName:"calico-apiserver-bffc5dd46-", Namespace:"calico-apiserver", SelfLink:"", UID:"ef634f66-b7a3-4b1f-99b3-8db2e225f26a", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 15, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bffc5dd46", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-163", ContainerID:"d008068efc2aabdb93344561c6e26e4d710313e8c6431f38987590fa15d9ea6e", Pod:"calico-apiserver-bffc5dd46-2qrs2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.120.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali34aa87130ca", MAC:"f2:52:b4:40:90:5f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:15:48.046561 env[1837]: 2024-12-13 14:15:48.033 [INFO][4655] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="d008068efc2aabdb93344561c6e26e4d710313e8c6431f38987590fa15d9ea6e" Namespace="calico-apiserver" Pod="calico-apiserver-bffc5dd46-2qrs2" WorkloadEndpoint="ip--172--31--26--163-k8s-calico--apiserver--bffc5dd46--2qrs2-eth0" Dec 13 14:15:48.218053 systemd-networkd[1507]: cali46c5baa8baa: Link UP Dec 13 14:15:48.223337 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali46c5baa8baa: link becomes ready Dec 13 14:15:48.222801 systemd-networkd[1507]: cali46c5baa8baa: Gained carrier Dec 13 14:15:48.286232 kubelet[2999]: I1213 14:15:48.285669 2999 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-rbxmj" podStartSLOduration=38.285582717 podStartE2EDuration="38.285582717s" podCreationTimestamp="2024-12-13 14:15:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:15:48.277884697 +0000 UTC m=+50.782490537" watchObservedRunningTime="2024-12-13 14:15:48.285582717 +0000 UTC m=+50.790188497" Dec 13 14:15:48.302593 env[1837]: 2024-12-13 14:15:47.688 [INFO][4666] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--163-k8s-coredns--76f75df574--nxsxq-eth0 coredns-76f75df574- kube-system d93f59e9-cea4-4e42-99d4-3d89f412196e 779 0 2024-12-13 14:15:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-26-163 coredns-76f75df574-nxsxq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali46c5baa8baa [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="02ab28f184cafe905f744593b6916ddccf94b8353c30947472acb92097c7bbc6" Namespace="kube-system" Pod="coredns-76f75df574-nxsxq" WorkloadEndpoint="ip--172--31--26--163-k8s-coredns--76f75df574--nxsxq-" Dec 13 14:15:48.302593 env[1837]: 
2024-12-13 14:15:47.688 [INFO][4666] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="02ab28f184cafe905f744593b6916ddccf94b8353c30947472acb92097c7bbc6" Namespace="kube-system" Pod="coredns-76f75df574-nxsxq" WorkloadEndpoint="ip--172--31--26--163-k8s-coredns--76f75df574--nxsxq-eth0" Dec 13 14:15:48.302593 env[1837]: 2024-12-13 14:15:47.998 [INFO][4683] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="02ab28f184cafe905f744593b6916ddccf94b8353c30947472acb92097c7bbc6" HandleID="k8s-pod-network.02ab28f184cafe905f744593b6916ddccf94b8353c30947472acb92097c7bbc6" Workload="ip--172--31--26--163-k8s-coredns--76f75df574--nxsxq-eth0" Dec 13 14:15:48.302593 env[1837]: 2024-12-13 14:15:48.077 [INFO][4683] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="02ab28f184cafe905f744593b6916ddccf94b8353c30947472acb92097c7bbc6" HandleID="k8s-pod-network.02ab28f184cafe905f744593b6916ddccf94b8353c30947472acb92097c7bbc6" Workload="ip--172--31--26--163-k8s-coredns--76f75df574--nxsxq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003076c0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-26-163", "pod":"coredns-76f75df574-nxsxq", "timestamp":"2024-12-13 14:15:47.998145831 +0000 UTC"}, Hostname:"ip-172-31-26-163", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 14:15:48.302593 env[1837]: 2024-12-13 14:15:48.077 [INFO][4683] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:15:48.302593 env[1837]: 2024-12-13 14:15:48.077 [INFO][4683] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 14:15:48.302593 env[1837]: 2024-12-13 14:15:48.078 [INFO][4683] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-163' Dec 13 14:15:48.302593 env[1837]: 2024-12-13 14:15:48.087 [INFO][4683] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.02ab28f184cafe905f744593b6916ddccf94b8353c30947472acb92097c7bbc6" host="ip-172-31-26-163" Dec 13 14:15:48.302593 env[1837]: 2024-12-13 14:15:48.120 [INFO][4683] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-26-163" Dec 13 14:15:48.302593 env[1837]: 2024-12-13 14:15:48.141 [INFO][4683] ipam/ipam.go 489: Trying affinity for 192.168.120.128/26 host="ip-172-31-26-163" Dec 13 14:15:48.302593 env[1837]: 2024-12-13 14:15:48.146 [INFO][4683] ipam/ipam.go 155: Attempting to load block cidr=192.168.120.128/26 host="ip-172-31-26-163" Dec 13 14:15:48.302593 env[1837]: 2024-12-13 14:15:48.154 [INFO][4683] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.120.128/26 host="ip-172-31-26-163" Dec 13 14:15:48.302593 env[1837]: 2024-12-13 14:15:48.154 [INFO][4683] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.120.128/26 handle="k8s-pod-network.02ab28f184cafe905f744593b6916ddccf94b8353c30947472acb92097c7bbc6" host="ip-172-31-26-163" Dec 13 14:15:48.302593 env[1837]: 2024-12-13 14:15:48.161 [INFO][4683] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.02ab28f184cafe905f744593b6916ddccf94b8353c30947472acb92097c7bbc6 Dec 13 14:15:48.302593 env[1837]: 2024-12-13 14:15:48.173 [INFO][4683] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.120.128/26 handle="k8s-pod-network.02ab28f184cafe905f744593b6916ddccf94b8353c30947472acb92097c7bbc6" host="ip-172-31-26-163" Dec 13 14:15:48.302593 env[1837]: 2024-12-13 14:15:48.185 [INFO][4683] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.120.132/26] block=192.168.120.128/26 
handle="k8s-pod-network.02ab28f184cafe905f744593b6916ddccf94b8353c30947472acb92097c7bbc6" host="ip-172-31-26-163" Dec 13 14:15:48.302593 env[1837]: 2024-12-13 14:15:48.185 [INFO][4683] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.120.132/26] handle="k8s-pod-network.02ab28f184cafe905f744593b6916ddccf94b8353c30947472acb92097c7bbc6" host="ip-172-31-26-163" Dec 13 14:15:48.302593 env[1837]: 2024-12-13 14:15:48.185 [INFO][4683] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:15:48.302593 env[1837]: 2024-12-13 14:15:48.185 [INFO][4683] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.120.132/26] IPv6=[] ContainerID="02ab28f184cafe905f744593b6916ddccf94b8353c30947472acb92097c7bbc6" HandleID="k8s-pod-network.02ab28f184cafe905f744593b6916ddccf94b8353c30947472acb92097c7bbc6" Workload="ip--172--31--26--163-k8s-coredns--76f75df574--nxsxq-eth0" Dec 13 14:15:48.304812 env[1837]: 2024-12-13 14:15:48.206 [INFO][4666] cni-plugin/k8s.go 386: Populated endpoint ContainerID="02ab28f184cafe905f744593b6916ddccf94b8353c30947472acb92097c7bbc6" Namespace="kube-system" Pod="coredns-76f75df574-nxsxq" WorkloadEndpoint="ip--172--31--26--163-k8s-coredns--76f75df574--nxsxq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--163-k8s-coredns--76f75df574--nxsxq-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"d93f59e9-cea4-4e42-99d4-3d89f412196e", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 15, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-163", ContainerID:"", Pod:"coredns-76f75df574-nxsxq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali46c5baa8baa", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:15:48.304812 env[1837]: 2024-12-13 14:15:48.206 [INFO][4666] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.120.132/32] ContainerID="02ab28f184cafe905f744593b6916ddccf94b8353c30947472acb92097c7bbc6" Namespace="kube-system" Pod="coredns-76f75df574-nxsxq" WorkloadEndpoint="ip--172--31--26--163-k8s-coredns--76f75df574--nxsxq-eth0" Dec 13 14:15:48.304812 env[1837]: 2024-12-13 14:15:48.208 [INFO][4666] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali46c5baa8baa ContainerID="02ab28f184cafe905f744593b6916ddccf94b8353c30947472acb92097c7bbc6" Namespace="kube-system" Pod="coredns-76f75df574-nxsxq" WorkloadEndpoint="ip--172--31--26--163-k8s-coredns--76f75df574--nxsxq-eth0" Dec 13 14:15:48.304812 env[1837]: 2024-12-13 14:15:48.223 [INFO][4666] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="02ab28f184cafe905f744593b6916ddccf94b8353c30947472acb92097c7bbc6" Namespace="kube-system" Pod="coredns-76f75df574-nxsxq" 
WorkloadEndpoint="ip--172--31--26--163-k8s-coredns--76f75df574--nxsxq-eth0" Dec 13 14:15:48.304812 env[1837]: 2024-12-13 14:15:48.225 [INFO][4666] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="02ab28f184cafe905f744593b6916ddccf94b8353c30947472acb92097c7bbc6" Namespace="kube-system" Pod="coredns-76f75df574-nxsxq" WorkloadEndpoint="ip--172--31--26--163-k8s-coredns--76f75df574--nxsxq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--163-k8s-coredns--76f75df574--nxsxq-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"d93f59e9-cea4-4e42-99d4-3d89f412196e", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 15, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-163", ContainerID:"02ab28f184cafe905f744593b6916ddccf94b8353c30947472acb92097c7bbc6", Pod:"coredns-76f75df574-nxsxq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali46c5baa8baa", MAC:"ea:e0:5e:b8:fe:cd", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:15:48.304812 env[1837]: 2024-12-13 14:15:48.283 [INFO][4666] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="02ab28f184cafe905f744593b6916ddccf94b8353c30947472acb92097c7bbc6" Namespace="kube-system" Pod="coredns-76f75df574-nxsxq" WorkloadEndpoint="ip--172--31--26--163-k8s-coredns--76f75df574--nxsxq-eth0" Dec 13 14:15:48.308000 audit[4719]: NETFILTER_CFG table=filter:103 family=2 entries=48 op=nft_register_chain pid=4719 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:15:48.308000 audit[4719]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=25868 a0=3 a1=fffffe0e66f0 a2=0 a3=ffff910acfa8 items=0 ppid=4142 pid=4719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:48.308000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:15:48.338000 audit[4725]: NETFILTER_CFG table=filter:104 family=2 entries=16 op=nft_register_rule pid=4725 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:15:48.338000 audit[4725]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=ffffec0d2c10 a2=0 a3=1 items=0 ppid=3178 pid=4725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:48.338000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:15:48.346962 env[1837]: time="2024-12-13T14:15:48.346344638Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:15:48.346000 audit[4725]: NETFILTER_CFG table=nat:105 family=2 entries=14 op=nft_register_rule pid=4725 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:15:48.346000 audit[4725]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffec0d2c10 a2=0 a3=1 items=0 ppid=3178 pid=4725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:48.346000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:15:48.360479 env[1837]: time="2024-12-13T14:15:48.346602411Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:15:48.360479 env[1837]: time="2024-12-13T14:15:48.348161597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:15:48.360479 env[1837]: time="2024-12-13T14:15:48.350058072Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d008068efc2aabdb93344561c6e26e4d710313e8c6431f38987590fa15d9ea6e pid=4707 runtime=io.containerd.runc.v2 Dec 13 14:15:48.390000 audit[4735]: NETFILTER_CFG table=filter:106 family=2 entries=13 op=nft_register_rule pid=4735 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:15:48.390000 audit[4735]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=ffffe3375a80 a2=0 a3=1 items=0 ppid=3178 pid=4735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:48.390000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:15:48.395000 audit[4735]: NETFILTER_CFG table=nat:107 family=2 entries=35 op=nft_register_chain pid=4735 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:15:48.395000 audit[4735]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffe3375a80 a2=0 a3=1 items=0 ppid=3178 pid=4735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:48.395000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:15:48.404000 audit[4740]: NETFILTER_CFG table=filter:108 family=2 entries=38 op=nft_register_chain pid=4740 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:15:48.404000 audit[4740]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19408 a0=3 a1=ffffeff38b00 
a2=0 a3=ffff88451fa8 items=0 ppid=4142 pid=4740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:48.404000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:15:48.478356 systemd[1]: run-containerd-runc-k8s.io-d008068efc2aabdb93344561c6e26e4d710313e8c6431f38987590fa15d9ea6e-runc.KDEGhX.mount: Deactivated successfully. Dec 13 14:15:48.548567 env[1837]: time="2024-12-13T14:15:48.548460952Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:15:48.548820 env[1837]: time="2024-12-13T14:15:48.548543948Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:15:48.548963 env[1837]: time="2024-12-13T14:15:48.548899596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:15:48.549380 env[1837]: time="2024-12-13T14:15:48.549280861Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/02ab28f184cafe905f744593b6916ddccf94b8353c30947472acb92097c7bbc6 pid=4762 runtime=io.containerd.runc.v2 Dec 13 14:15:48.613495 env[1837]: time="2024-12-13T14:15:48.613422024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bffc5dd46-2qrs2,Uid:ef634f66-b7a3-4b1f-99b3-8db2e225f26a,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"d008068efc2aabdb93344561c6e26e4d710313e8c6431f38987590fa15d9ea6e\"" Dec 13 14:15:48.760482 systemd-networkd[1507]: cali26f07bd83c5: Gained IPv6LL Dec 13 14:15:48.787889 env[1837]: time="2024-12-13T14:15:48.787832677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-nxsxq,Uid:d93f59e9-cea4-4e42-99d4-3d89f412196e,Namespace:kube-system,Attempt:1,} returns sandbox id \"02ab28f184cafe905f744593b6916ddccf94b8353c30947472acb92097c7bbc6\"" Dec 13 14:15:48.793773 env[1837]: time="2024-12-13T14:15:48.793717902Z" level=info msg="CreateContainer within sandbox \"02ab28f184cafe905f744593b6916ddccf94b8353c30947472acb92097c7bbc6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 14:15:48.824475 env[1837]: time="2024-12-13T14:15:48.824360007Z" level=info msg="CreateContainer within sandbox \"02ab28f184cafe905f744593b6916ddccf94b8353c30947472acb92097c7bbc6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"706aa79ebbba8b2735e6d037d22f7b76324570b1954fc0f8465d228bd675b0e3\"" Dec 13 14:15:48.844367 env[1837]: time="2024-12-13T14:15:48.844290098Z" level=info msg="StartContainer for \"706aa79ebbba8b2735e6d037d22f7b76324570b1954fc0f8465d228bd675b0e3\"" Dec 13 14:15:48.899006 env[1837]: time="2024-12-13T14:15:48.898927549Z" level=info msg="StopPodSandbox for 
\"5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699\"" Dec 13 14:15:49.020448 env[1837]: time="2024-12-13T14:15:49.020003919Z" level=info msg="StartContainer for \"706aa79ebbba8b2735e6d037d22f7b76324570b1954fc0f8465d228bd675b0e3\" returns successfully" Dec 13 14:15:49.083499 systemd-networkd[1507]: cali34aa87130ca: Gained IPv6LL Dec 13 14:15:49.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.26.163:22-139.178.89.65:43152 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:15:49.170803 systemd[1]: Started sshd@7-172.31.26.163:22-139.178.89.65:43152.service. Dec 13 14:15:49.324782 kubelet[2999]: I1213 14:15:49.316229 2999 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-nxsxq" podStartSLOduration=39.316141126 podStartE2EDuration="39.316141126s" podCreationTimestamp="2024-12-13 14:15:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:15:49.278521034 +0000 UTC m=+51.783126790" watchObservedRunningTime="2024-12-13 14:15:49.316141126 +0000 UTC m=+51.820746918" Dec 13 14:15:49.349057 env[1837]: 2024-12-13 14:15:49.175 [INFO][4856] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699" Dec 13 14:15:49.349057 env[1837]: 2024-12-13 14:15:49.176 [INFO][4856] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699" iface="eth0" netns="/var/run/netns/cni-afcabf9c-dce5-4024-b329-01567894cb91" Dec 13 14:15:49.349057 env[1837]: 2024-12-13 14:15:49.177 [INFO][4856] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699" iface="eth0" netns="/var/run/netns/cni-afcabf9c-dce5-4024-b329-01567894cb91" Dec 13 14:15:49.349057 env[1837]: 2024-12-13 14:15:49.186 [INFO][4856] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699" iface="eth0" netns="/var/run/netns/cni-afcabf9c-dce5-4024-b329-01567894cb91" Dec 13 14:15:49.349057 env[1837]: 2024-12-13 14:15:49.187 [INFO][4856] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699" Dec 13 14:15:49.349057 env[1837]: 2024-12-13 14:15:49.187 [INFO][4856] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699" Dec 13 14:15:49.349057 env[1837]: 2024-12-13 14:15:49.271 [INFO][4866] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699" HandleID="k8s-pod-network.5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699" Workload="ip--172--31--26--163-k8s-calico--apiserver--bffc5dd46--5tznx-eth0" Dec 13 14:15:49.349057 env[1837]: 2024-12-13 14:15:49.271 [INFO][4866] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:15:49.349057 env[1837]: 2024-12-13 14:15:49.271 [INFO][4866] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:15:49.349057 env[1837]: 2024-12-13 14:15:49.310 [WARNING][4866] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699" HandleID="k8s-pod-network.5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699" Workload="ip--172--31--26--163-k8s-calico--apiserver--bffc5dd46--5tznx-eth0" Dec 13 14:15:49.349057 env[1837]: 2024-12-13 14:15:49.310 [INFO][4866] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699" HandleID="k8s-pod-network.5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699" Workload="ip--172--31--26--163-k8s-calico--apiserver--bffc5dd46--5tznx-eth0" Dec 13 14:15:49.349057 env[1837]: 2024-12-13 14:15:49.324 [INFO][4866] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:15:49.349057 env[1837]: 2024-12-13 14:15:49.333 [INFO][4856] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699" Dec 13 14:15:49.349057 env[1837]: time="2024-12-13T14:15:49.339326677Z" level=info msg="TearDown network for sandbox \"5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699\" successfully" Dec 13 14:15:49.349057 env[1837]: time="2024-12-13T14:15:49.339371001Z" level=info msg="StopPodSandbox for \"5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699\" returns successfully" Dec 13 14:15:49.349057 env[1837]: time="2024-12-13T14:15:49.342323949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bffc5dd46-5tznx,Uid:29176771-bd16-429b-96c6-cf2e38be6836,Namespace:calico-apiserver,Attempt:1,}" Dec 13 14:15:49.352035 systemd[1]: run-netns-cni\x2dafcabf9c\x2ddce5\x2d4024\x2db329\x2d01567894cb91.mount: Deactivated successfully. 
Dec 13 14:15:49.386000 audit[4874]: NETFILTER_CFG table=filter:109 family=2 entries=10 op=nft_register_rule pid=4874 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:15:49.386000 audit[4874]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=fffffbab8d80 a2=0 a3=1 items=0 ppid=3178 pid=4874 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:49.386000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:15:49.392000 audit[4874]: NETFILTER_CFG table=nat:110 family=2 entries=44 op=nft_register_rule pid=4874 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:15:49.392000 audit[4874]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=fffffbab8d80 a2=0 a3=1 items=0 ppid=3178 pid=4874 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:49.392000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:15:49.452716 kernel: kauditd_printk_skb: 544 callbacks suppressed Dec 13 14:15:49.452886 kernel: audit: type=1101 audit(1734099349.440:412): pid=4865 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:15:49.440000 audit[4865]: USER_ACCT pid=4865 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" 
exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:15:49.454984 sshd[4865]: Accepted publickey for core from 139.178.89.65 port 43152 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:15:49.457856 sshd[4865]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:15:49.473570 kernel: audit: type=1103 audit(1734099349.455:413): pid=4865 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:15:49.455000 audit[4865]: CRED_ACQ pid=4865 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:15:49.482033 kernel: audit: type=1006 audit(1734099349.455:414): pid=4865 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Dec 13 14:15:49.455000 audit[4865]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd2f02580 a2=3 a3=1 items=0 ppid=1 pid=4865 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:49.499537 kernel: audit: type=1300 audit(1734099349.455:414): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd2f02580 a2=3 a3=1 items=0 ppid=1 pid=4865 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:49.499704 kernel: audit: type=1327 audit(1734099349.455:414): proctitle=737368643A20636F7265205B707269765D Dec 13 14:15:49.455000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 
14:15:49.496881 systemd[1]: Started session-8.scope. Dec 13 14:15:49.497638 systemd-logind[1829]: New session 8 of user core. Dec 13 14:15:49.518000 audit[4865]: USER_START pid=4865 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:15:49.532000 audit[4887]: CRED_ACQ pid=4887 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:15:49.535763 kernel: audit: type=1105 audit(1734099349.518:415): pid=4865 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:15:49.554715 kernel: audit: type=1103 audit(1734099349.532:416): pid=4887 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:15:49.784602 systemd-networkd[1507]: calidc4bcdcf13f: Link UP Dec 13 14:15:49.788663 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:15:49.789326 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calidc4bcdcf13f: link becomes ready Dec 13 14:15:49.791253 systemd-networkd[1507]: calidc4bcdcf13f: Gained carrier Dec 13 14:15:49.840328 env[1837]: 2024-12-13 14:15:49.559 [INFO][4875] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--163-k8s-calico--apiserver--bffc5dd46--5tznx-eth0 
calico-apiserver-bffc5dd46- calico-apiserver 29176771-bd16-429b-96c6-cf2e38be6836 842 0 2024-12-13 14:15:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:bffc5dd46 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-26-163 calico-apiserver-bffc5dd46-5tznx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidc4bcdcf13f [] []}} ContainerID="0edf30dcc14dd38363fba7ac237e9ac9a5fcf692c167de165d5eb5124079b17d" Namespace="calico-apiserver" Pod="calico-apiserver-bffc5dd46-5tznx" WorkloadEndpoint="ip--172--31--26--163-k8s-calico--apiserver--bffc5dd46--5tznx-" Dec 13 14:15:49.840328 env[1837]: 2024-12-13 14:15:49.560 [INFO][4875] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0edf30dcc14dd38363fba7ac237e9ac9a5fcf692c167de165d5eb5124079b17d" Namespace="calico-apiserver" Pod="calico-apiserver-bffc5dd46-5tznx" WorkloadEndpoint="ip--172--31--26--163-k8s-calico--apiserver--bffc5dd46--5tznx-eth0" Dec 13 14:15:49.840328 env[1837]: 2024-12-13 14:15:49.627 [INFO][4888] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0edf30dcc14dd38363fba7ac237e9ac9a5fcf692c167de165d5eb5124079b17d" HandleID="k8s-pod-network.0edf30dcc14dd38363fba7ac237e9ac9a5fcf692c167de165d5eb5124079b17d" Workload="ip--172--31--26--163-k8s-calico--apiserver--bffc5dd46--5tznx-eth0" Dec 13 14:15:49.840328 env[1837]: 2024-12-13 14:15:49.650 [INFO][4888] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0edf30dcc14dd38363fba7ac237e9ac9a5fcf692c167de165d5eb5124079b17d" HandleID="k8s-pod-network.0edf30dcc14dd38363fba7ac237e9ac9a5fcf692c167de165d5eb5124079b17d" Workload="ip--172--31--26--163-k8s-calico--apiserver--bffc5dd46--5tznx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000334c80), 
Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-26-163", "pod":"calico-apiserver-bffc5dd46-5tznx", "timestamp":"2024-12-13 14:15:49.627454135 +0000 UTC"}, Hostname:"ip-172-31-26-163", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 14:15:49.840328 env[1837]: 2024-12-13 14:15:49.650 [INFO][4888] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:15:49.840328 env[1837]: 2024-12-13 14:15:49.650 [INFO][4888] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:15:49.840328 env[1837]: 2024-12-13 14:15:49.651 [INFO][4888] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-163' Dec 13 14:15:49.840328 env[1837]: 2024-12-13 14:15:49.654 [INFO][4888] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0edf30dcc14dd38363fba7ac237e9ac9a5fcf692c167de165d5eb5124079b17d" host="ip-172-31-26-163" Dec 13 14:15:49.840328 env[1837]: 2024-12-13 14:15:49.663 [INFO][4888] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-26-163" Dec 13 14:15:49.840328 env[1837]: 2024-12-13 14:15:49.673 [INFO][4888] ipam/ipam.go 489: Trying affinity for 192.168.120.128/26 host="ip-172-31-26-163" Dec 13 14:15:49.840328 env[1837]: 2024-12-13 14:15:49.677 [INFO][4888] ipam/ipam.go 155: Attempting to load block cidr=192.168.120.128/26 host="ip-172-31-26-163" Dec 13 14:15:49.840328 env[1837]: 2024-12-13 14:15:49.681 [INFO][4888] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.120.128/26 host="ip-172-31-26-163" Dec 13 14:15:49.840328 env[1837]: 2024-12-13 14:15:49.682 [INFO][4888] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.120.128/26 handle="k8s-pod-network.0edf30dcc14dd38363fba7ac237e9ac9a5fcf692c167de165d5eb5124079b17d" host="ip-172-31-26-163" Dec 13 
14:15:49.840328 env[1837]: 2024-12-13 14:15:49.687 [INFO][4888] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0edf30dcc14dd38363fba7ac237e9ac9a5fcf692c167de165d5eb5124079b17d Dec 13 14:15:49.840328 env[1837]: 2024-12-13 14:15:49.694 [INFO][4888] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.120.128/26 handle="k8s-pod-network.0edf30dcc14dd38363fba7ac237e9ac9a5fcf692c167de165d5eb5124079b17d" host="ip-172-31-26-163" Dec 13 14:15:49.840328 env[1837]: 2024-12-13 14:15:49.714 [INFO][4888] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.120.133/26] block=192.168.120.128/26 handle="k8s-pod-network.0edf30dcc14dd38363fba7ac237e9ac9a5fcf692c167de165d5eb5124079b17d" host="ip-172-31-26-163" Dec 13 14:15:49.840328 env[1837]: 2024-12-13 14:15:49.715 [INFO][4888] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.120.133/26] handle="k8s-pod-network.0edf30dcc14dd38363fba7ac237e9ac9a5fcf692c167de165d5eb5124079b17d" host="ip-172-31-26-163" Dec 13 14:15:49.840328 env[1837]: 2024-12-13 14:15:49.715 [INFO][4888] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 14:15:49.840328 env[1837]: 2024-12-13 14:15:49.715 [INFO][4888] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.120.133/26] IPv6=[] ContainerID="0edf30dcc14dd38363fba7ac237e9ac9a5fcf692c167de165d5eb5124079b17d" HandleID="k8s-pod-network.0edf30dcc14dd38363fba7ac237e9ac9a5fcf692c167de165d5eb5124079b17d" Workload="ip--172--31--26--163-k8s-calico--apiserver--bffc5dd46--5tznx-eth0" Dec 13 14:15:49.842895 env[1837]: 2024-12-13 14:15:49.743 [INFO][4875] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0edf30dcc14dd38363fba7ac237e9ac9a5fcf692c167de165d5eb5124079b17d" Namespace="calico-apiserver" Pod="calico-apiserver-bffc5dd46-5tznx" WorkloadEndpoint="ip--172--31--26--163-k8s-calico--apiserver--bffc5dd46--5tznx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--163-k8s-calico--apiserver--bffc5dd46--5tznx-eth0", GenerateName:"calico-apiserver-bffc5dd46-", Namespace:"calico-apiserver", SelfLink:"", UID:"29176771-bd16-429b-96c6-cf2e38be6836", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 15, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bffc5dd46", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-163", ContainerID:"", Pod:"calico-apiserver-bffc5dd46-5tznx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.120.133/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidc4bcdcf13f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:15:49.842895 env[1837]: 2024-12-13 14:15:49.743 [INFO][4875] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.120.133/32] ContainerID="0edf30dcc14dd38363fba7ac237e9ac9a5fcf692c167de165d5eb5124079b17d" Namespace="calico-apiserver" Pod="calico-apiserver-bffc5dd46-5tznx" WorkloadEndpoint="ip--172--31--26--163-k8s-calico--apiserver--bffc5dd46--5tznx-eth0" Dec 13 14:15:49.842895 env[1837]: 2024-12-13 14:15:49.743 [INFO][4875] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidc4bcdcf13f ContainerID="0edf30dcc14dd38363fba7ac237e9ac9a5fcf692c167de165d5eb5124079b17d" Namespace="calico-apiserver" Pod="calico-apiserver-bffc5dd46-5tznx" WorkloadEndpoint="ip--172--31--26--163-k8s-calico--apiserver--bffc5dd46--5tznx-eth0" Dec 13 14:15:49.842895 env[1837]: 2024-12-13 14:15:49.795 [INFO][4875] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0edf30dcc14dd38363fba7ac237e9ac9a5fcf692c167de165d5eb5124079b17d" Namespace="calico-apiserver" Pod="calico-apiserver-bffc5dd46-5tznx" WorkloadEndpoint="ip--172--31--26--163-k8s-calico--apiserver--bffc5dd46--5tznx-eth0" Dec 13 14:15:49.842895 env[1837]: 2024-12-13 14:15:49.796 [INFO][4875] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0edf30dcc14dd38363fba7ac237e9ac9a5fcf692c167de165d5eb5124079b17d" Namespace="calico-apiserver" Pod="calico-apiserver-bffc5dd46-5tznx" WorkloadEndpoint="ip--172--31--26--163-k8s-calico--apiserver--bffc5dd46--5tznx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--163-k8s-calico--apiserver--bffc5dd46--5tznx-eth0", 
GenerateName:"calico-apiserver-bffc5dd46-", Namespace:"calico-apiserver", SelfLink:"", UID:"29176771-bd16-429b-96c6-cf2e38be6836", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 15, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bffc5dd46", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-163", ContainerID:"0edf30dcc14dd38363fba7ac237e9ac9a5fcf692c167de165d5eb5124079b17d", Pod:"calico-apiserver-bffc5dd46-5tznx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.120.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidc4bcdcf13f", MAC:"26:cd:cc:26:bb:28", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:15:49.842895 env[1837]: 2024-12-13 14:15:49.822 [INFO][4875] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0edf30dcc14dd38363fba7ac237e9ac9a5fcf692c167de165d5eb5124079b17d" Namespace="calico-apiserver" Pod="calico-apiserver-bffc5dd46-5tznx" WorkloadEndpoint="ip--172--31--26--163-k8s-calico--apiserver--bffc5dd46--5tznx-eth0" Dec 13 14:15:49.877225 kernel: audit: type=1325 audit(1734099349.856:417): table=filter:111 family=2 entries=46 op=nft_register_chain pid=4907 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:15:49.877964 kernel: audit: type=1300 audit(1734099349.856:417): arch=c00000b7 
syscall=211 success=yes exit=23892 a0=3 a1=ffffcfcfb180 a2=0 a3=ffffbe6a4fa8 items=0 ppid=4142 pid=4907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:49.856000 audit[4907]: NETFILTER_CFG table=filter:111 family=2 entries=46 op=nft_register_chain pid=4907 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:15:49.856000 audit[4907]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=23892 a0=3 a1=ffffcfcfb180 a2=0 a3=ffffbe6a4fa8 items=0 ppid=4142 pid=4907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:49.895245 kernel: audit: type=1327 audit(1734099349.856:417): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:15:49.856000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:15:49.932936 env[1837]: time="2024-12-13T14:15:49.932882018Z" level=info msg="StopPodSandbox for \"d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14\"" Dec 13 14:15:49.992372 env[1837]: time="2024-12-13T14:15:49.976886586Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:15:49.992372 env[1837]: time="2024-12-13T14:15:49.977022078Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:15:49.992372 env[1837]: time="2024-12-13T14:15:49.977067746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:15:49.992372 env[1837]: time="2024-12-13T14:15:49.977322161Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0edf30dcc14dd38363fba7ac237e9ac9a5fcf692c167de165d5eb5124079b17d pid=4928 runtime=io.containerd.runc.v2 Dec 13 14:15:50.014527 sshd[4865]: pam_unix(sshd:session): session closed for user core Dec 13 14:15:50.020000 audit[4865]: USER_END pid=4865 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:15:50.020000 audit[4865]: CRED_DISP pid=4865 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:15:50.026000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.26.163:22-139.178.89.65:43152 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:15:50.026077 systemd[1]: sshd@7-172.31.26.163:22-139.178.89.65:43152.service: Deactivated successfully. Dec 13 14:15:50.029911 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 14:15:50.035833 systemd-logind[1829]: Session 8 logged out. Waiting for processes to exit. Dec 13 14:15:50.043225 systemd-logind[1829]: Removed session 8. 
Dec 13 14:15:50.104479 systemd-networkd[1507]: cali46c5baa8baa: Gained IPv6LL Dec 13 14:15:50.418000 audit[4977]: NETFILTER_CFG table=filter:112 family=2 entries=10 op=nft_register_rule pid=4977 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:15:50.418000 audit[4977]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=ffffcf78bda0 a2=0 a3=1 items=0 ppid=3178 pid=4977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:50.418000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:15:50.445000 audit[4977]: NETFILTER_CFG table=nat:113 family=2 entries=56 op=nft_register_chain pid=4977 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:15:50.445000 audit[4977]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffcf78bda0 a2=0 a3=1 items=0 ppid=3178 pid=4977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:50.445000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:15:50.454229 env[1837]: 2024-12-13 14:15:50.288 [INFO][4958] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14" Dec 13 14:15:50.454229 env[1837]: 2024-12-13 14:15:50.290 [INFO][4958] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14" iface="eth0" netns="/var/run/netns/cni-2097eec9-981d-404c-0285-2a632d7a4515" Dec 13 14:15:50.454229 env[1837]: 2024-12-13 14:15:50.291 [INFO][4958] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14" iface="eth0" netns="/var/run/netns/cni-2097eec9-981d-404c-0285-2a632d7a4515" Dec 13 14:15:50.454229 env[1837]: 2024-12-13 14:15:50.293 [INFO][4958] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14" iface="eth0" netns="/var/run/netns/cni-2097eec9-981d-404c-0285-2a632d7a4515" Dec 13 14:15:50.454229 env[1837]: 2024-12-13 14:15:50.293 [INFO][4958] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14" Dec 13 14:15:50.454229 env[1837]: 2024-12-13 14:15:50.293 [INFO][4958] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14" Dec 13 14:15:50.454229 env[1837]: 2024-12-13 14:15:50.428 [INFO][4971] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14" HandleID="k8s-pod-network.d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14" Workload="ip--172--31--26--163-k8s-csi--node--driver--79sgh-eth0" Dec 13 14:15:50.454229 env[1837]: 2024-12-13 14:15:50.429 [INFO][4971] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:15:50.454229 env[1837]: 2024-12-13 14:15:50.429 [INFO][4971] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:15:50.454229 env[1837]: 2024-12-13 14:15:50.445 [WARNING][4971] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14" HandleID="k8s-pod-network.d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14" Workload="ip--172--31--26--163-k8s-csi--node--driver--79sgh-eth0" Dec 13 14:15:50.454229 env[1837]: 2024-12-13 14:15:50.445 [INFO][4971] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14" HandleID="k8s-pod-network.d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14" Workload="ip--172--31--26--163-k8s-csi--node--driver--79sgh-eth0" Dec 13 14:15:50.454229 env[1837]: 2024-12-13 14:15:50.449 [INFO][4971] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:15:50.454229 env[1837]: 2024-12-13 14:15:50.451 [INFO][4958] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14" Dec 13 14:15:50.456003 env[1837]: time="2024-12-13T14:15:50.455948910Z" level=info msg="TearDown network for sandbox \"d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14\" successfully" Dec 13 14:15:50.456155 env[1837]: time="2024-12-13T14:15:50.456121612Z" level=info msg="StopPodSandbox for \"d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14\" returns successfully" Dec 13 14:15:50.457574 env[1837]: time="2024-12-13T14:15:50.457519921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-79sgh,Uid:83621a84-eb8a-4acb-be6b-37240d10ca28,Namespace:calico-system,Attempt:1,}" Dec 13 14:15:50.470115 systemd[1]: run-netns-cni\x2d2097eec9\x2d981d\x2d404c\x2d0285\x2d2a632d7a4515.mount: Deactivated successfully. 
Dec 13 14:15:50.473778 env[1837]: time="2024-12-13T14:15:50.473693971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bffc5dd46-5tznx,Uid:29176771-bd16-429b-96c6-cf2e38be6836,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"0edf30dcc14dd38363fba7ac237e9ac9a5fcf692c167de165d5eb5124079b17d\"" Dec 13 14:15:50.873230 systemd-networkd[1507]: calib6b943e0f60: Link UP Dec 13 14:15:50.881447 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:15:50.881597 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calib6b943e0f60: link becomes ready Dec 13 14:15:50.880171 systemd-networkd[1507]: calib6b943e0f60: Gained carrier Dec 13 14:15:50.915174 env[1837]: 2024-12-13 14:15:50.672 [INFO][4988] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--163-k8s-csi--node--driver--79sgh-eth0 csi-node-driver- calico-system 83621a84-eb8a-4acb-be6b-37240d10ca28 862 0 2024-12-13 14:15:20 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-26-163 csi-node-driver-79sgh eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib6b943e0f60 [] []}} ContainerID="c7b67fed57c2bb29884006a91a007f4caea0f8f29b42dfe7366bad18b45410f4" Namespace="calico-system" Pod="csi-node-driver-79sgh" WorkloadEndpoint="ip--172--31--26--163-k8s-csi--node--driver--79sgh-" Dec 13 14:15:50.915174 env[1837]: 2024-12-13 14:15:50.672 [INFO][4988] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c7b67fed57c2bb29884006a91a007f4caea0f8f29b42dfe7366bad18b45410f4" Namespace="calico-system" Pod="csi-node-driver-79sgh" WorkloadEndpoint="ip--172--31--26--163-k8s-csi--node--driver--79sgh-eth0" Dec 13 
14:15:50.915174 env[1837]: 2024-12-13 14:15:50.756 [INFO][5000] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c7b67fed57c2bb29884006a91a007f4caea0f8f29b42dfe7366bad18b45410f4" HandleID="k8s-pod-network.c7b67fed57c2bb29884006a91a007f4caea0f8f29b42dfe7366bad18b45410f4" Workload="ip--172--31--26--163-k8s-csi--node--driver--79sgh-eth0" Dec 13 14:15:50.915174 env[1837]: 2024-12-13 14:15:50.788 [INFO][5000] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c7b67fed57c2bb29884006a91a007f4caea0f8f29b42dfe7366bad18b45410f4" HandleID="k8s-pod-network.c7b67fed57c2bb29884006a91a007f4caea0f8f29b42dfe7366bad18b45410f4" Workload="ip--172--31--26--163-k8s-csi--node--driver--79sgh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002a0fd0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-163", "pod":"csi-node-driver-79sgh", "timestamp":"2024-12-13 14:15:50.756075196 +0000 UTC"}, Hostname:"ip-172-31-26-163", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 14:15:50.915174 env[1837]: 2024-12-13 14:15:50.788 [INFO][5000] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:15:50.915174 env[1837]: 2024-12-13 14:15:50.788 [INFO][5000] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 14:15:50.915174 env[1837]: 2024-12-13 14:15:50.788 [INFO][5000] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-163' Dec 13 14:15:50.915174 env[1837]: 2024-12-13 14:15:50.791 [INFO][5000] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c7b67fed57c2bb29884006a91a007f4caea0f8f29b42dfe7366bad18b45410f4" host="ip-172-31-26-163" Dec 13 14:15:50.915174 env[1837]: 2024-12-13 14:15:50.799 [INFO][5000] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-26-163" Dec 13 14:15:50.915174 env[1837]: 2024-12-13 14:15:50.813 [INFO][5000] ipam/ipam.go 489: Trying affinity for 192.168.120.128/26 host="ip-172-31-26-163" Dec 13 14:15:50.915174 env[1837]: 2024-12-13 14:15:50.816 [INFO][5000] ipam/ipam.go 155: Attempting to load block cidr=192.168.120.128/26 host="ip-172-31-26-163" Dec 13 14:15:50.915174 env[1837]: 2024-12-13 14:15:50.822 [INFO][5000] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.120.128/26 host="ip-172-31-26-163" Dec 13 14:15:50.915174 env[1837]: 2024-12-13 14:15:50.822 [INFO][5000] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.120.128/26 handle="k8s-pod-network.c7b67fed57c2bb29884006a91a007f4caea0f8f29b42dfe7366bad18b45410f4" host="ip-172-31-26-163" Dec 13 14:15:50.915174 env[1837]: 2024-12-13 14:15:50.837 [INFO][5000] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c7b67fed57c2bb29884006a91a007f4caea0f8f29b42dfe7366bad18b45410f4 Dec 13 14:15:50.915174 env[1837]: 2024-12-13 14:15:50.847 [INFO][5000] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.120.128/26 handle="k8s-pod-network.c7b67fed57c2bb29884006a91a007f4caea0f8f29b42dfe7366bad18b45410f4" host="ip-172-31-26-163" Dec 13 14:15:50.915174 env[1837]: 2024-12-13 14:15:50.865 [INFO][5000] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.120.134/26] block=192.168.120.128/26 
handle="k8s-pod-network.c7b67fed57c2bb29884006a91a007f4caea0f8f29b42dfe7366bad18b45410f4" host="ip-172-31-26-163" Dec 13 14:15:50.915174 env[1837]: 2024-12-13 14:15:50.865 [INFO][5000] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.120.134/26] handle="k8s-pod-network.c7b67fed57c2bb29884006a91a007f4caea0f8f29b42dfe7366bad18b45410f4" host="ip-172-31-26-163" Dec 13 14:15:50.915174 env[1837]: 2024-12-13 14:15:50.865 [INFO][5000] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:15:50.915174 env[1837]: 2024-12-13 14:15:50.865 [INFO][5000] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.120.134/26] IPv6=[] ContainerID="c7b67fed57c2bb29884006a91a007f4caea0f8f29b42dfe7366bad18b45410f4" HandleID="k8s-pod-network.c7b67fed57c2bb29884006a91a007f4caea0f8f29b42dfe7366bad18b45410f4" Workload="ip--172--31--26--163-k8s-csi--node--driver--79sgh-eth0" Dec 13 14:15:50.916510 env[1837]: 2024-12-13 14:15:50.868 [INFO][4988] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c7b67fed57c2bb29884006a91a007f4caea0f8f29b42dfe7366bad18b45410f4" Namespace="calico-system" Pod="csi-node-driver-79sgh" WorkloadEndpoint="ip--172--31--26--163-k8s-csi--node--driver--79sgh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--163-k8s-csi--node--driver--79sgh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"83621a84-eb8a-4acb-be6b-37240d10ca28", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 15, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-163", ContainerID:"", Pod:"csi-node-driver-79sgh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.120.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib6b943e0f60", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:15:50.916510 env[1837]: 2024-12-13 14:15:50.868 [INFO][4988] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.120.134/32] ContainerID="c7b67fed57c2bb29884006a91a007f4caea0f8f29b42dfe7366bad18b45410f4" Namespace="calico-system" Pod="csi-node-driver-79sgh" WorkloadEndpoint="ip--172--31--26--163-k8s-csi--node--driver--79sgh-eth0" Dec 13 14:15:50.916510 env[1837]: 2024-12-13 14:15:50.869 [INFO][4988] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib6b943e0f60 ContainerID="c7b67fed57c2bb29884006a91a007f4caea0f8f29b42dfe7366bad18b45410f4" Namespace="calico-system" Pod="csi-node-driver-79sgh" WorkloadEndpoint="ip--172--31--26--163-k8s-csi--node--driver--79sgh-eth0" Dec 13 14:15:50.916510 env[1837]: 2024-12-13 14:15:50.881 [INFO][4988] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c7b67fed57c2bb29884006a91a007f4caea0f8f29b42dfe7366bad18b45410f4" Namespace="calico-system" Pod="csi-node-driver-79sgh" WorkloadEndpoint="ip--172--31--26--163-k8s-csi--node--driver--79sgh-eth0" Dec 13 14:15:50.916510 env[1837]: 2024-12-13 14:15:50.882 [INFO][4988] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c7b67fed57c2bb29884006a91a007f4caea0f8f29b42dfe7366bad18b45410f4" Namespace="calico-system" 
Pod="csi-node-driver-79sgh" WorkloadEndpoint="ip--172--31--26--163-k8s-csi--node--driver--79sgh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--163-k8s-csi--node--driver--79sgh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"83621a84-eb8a-4acb-be6b-37240d10ca28", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 15, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-163", ContainerID:"c7b67fed57c2bb29884006a91a007f4caea0f8f29b42dfe7366bad18b45410f4", Pod:"csi-node-driver-79sgh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.120.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib6b943e0f60", MAC:"ea:44:1b:98:c7:eb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:15:50.916510 env[1837]: 2024-12-13 14:15:50.908 [INFO][4988] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c7b67fed57c2bb29884006a91a007f4caea0f8f29b42dfe7366bad18b45410f4" Namespace="calico-system" Pod="csi-node-driver-79sgh" WorkloadEndpoint="ip--172--31--26--163-k8s-csi--node--driver--79sgh-eth0" Dec 13 
14:15:50.936694 systemd-networkd[1507]: calidc4bcdcf13f: Gained IPv6LL Dec 13 14:15:50.959911 env[1837]: time="2024-12-13T14:15:50.959854075Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:50.966734 env[1837]: time="2024-12-13T14:15:50.966677823Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:50.972306 env[1837]: time="2024-12-13T14:15:50.972250982Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:50.978371 env[1837]: time="2024-12-13T14:15:50.978312143Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:50.979151 env[1837]: time="2024-12-13T14:15:50.979098152Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Dec 13 14:15:51.001640 env[1837]: time="2024-12-13T14:15:51.001566149Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 14:15:51.032000 audit[5028]: NETFILTER_CFG table=filter:114 family=2 entries=50 op=nft_register_chain pid=5028 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:15:51.032000 audit[5028]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=23392 a0=3 a1=ffffcaae3ce0 a2=0 a3=ffffa4f9afa8 items=0 ppid=4142 pid=5028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:51.032000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:15:51.040423 env[1837]: time="2024-12-13T14:15:51.040369736Z" level=info msg="CreateContainer within sandbox \"7006def5caab23fe965eec24bcf24461b7389d5873b10e57340ca7fe51396050\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 14:15:51.051716 env[1837]: time="2024-12-13T14:15:51.047304332Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:15:51.051716 env[1837]: time="2024-12-13T14:15:51.051386656Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:15:51.051716 env[1837]: time="2024-12-13T14:15:51.051415454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:15:51.052225 env[1837]: time="2024-12-13T14:15:51.052100854Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c7b67fed57c2bb29884006a91a007f4caea0f8f29b42dfe7366bad18b45410f4 pid=5030 runtime=io.containerd.runc.v2 Dec 13 14:15:51.168381 env[1837]: time="2024-12-13T14:15:51.168323998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-79sgh,Uid:83621a84-eb8a-4acb-be6b-37240d10ca28,Namespace:calico-system,Attempt:1,} returns sandbox id \"c7b67fed57c2bb29884006a91a007f4caea0f8f29b42dfe7366bad18b45410f4\"" Dec 13 14:15:51.171867 env[1837]: time="2024-12-13T14:15:51.171810734Z" level=info msg="CreateContainer within sandbox \"7006def5caab23fe965eec24bcf24461b7389d5873b10e57340ca7fe51396050\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"8e9e3442e910e220d8a9f2d36a9de38d4e0db1aa66f9bc651ffdead7b3187ced\"" Dec 13 14:15:51.173347 env[1837]: time="2024-12-13T14:15:51.173297580Z" level=info msg="StartContainer for \"8e9e3442e910e220d8a9f2d36a9de38d4e0db1aa66f9bc651ffdead7b3187ced\"" Dec 13 14:15:51.332018 env[1837]: time="2024-12-13T14:15:51.331897128Z" level=info msg="StartContainer for \"8e9e3442e910e220d8a9f2d36a9de38d4e0db1aa66f9bc651ffdead7b3187ced\" returns successfully" Dec 13 14:15:52.022991 systemd-networkd[1507]: calib6b943e0f60: Gained IPv6LL Dec 13 14:15:52.384219 systemd[1]: run-containerd-runc-k8s.io-8e9e3442e910e220d8a9f2d36a9de38d4e0db1aa66f9bc651ffdead7b3187ced-runc.NPYGBn.mount: Deactivated successfully. 
Dec 13 14:15:52.551876 kubelet[2999]: I1213 14:15:52.551835 2999 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-54654bf745-9fq4l" podStartSLOduration=27.698549596 podStartE2EDuration="32.551773543s" podCreationTimestamp="2024-12-13 14:15:20 +0000 UTC" firstStartedPulling="2024-12-13 14:15:46.129763515 +0000 UTC m=+48.634369271" lastFinishedPulling="2024-12-13 14:15:50.98298745 +0000 UTC m=+53.487593218" observedRunningTime="2024-12-13 14:15:52.32566889 +0000 UTC m=+54.830274658" watchObservedRunningTime="2024-12-13 14:15:52.551773543 +0000 UTC m=+55.056379323" Dec 13 14:15:54.012188 env[1837]: time="2024-12-13T14:15:54.012112993Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:54.016833 env[1837]: time="2024-12-13T14:15:54.016710399Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:54.021358 env[1837]: time="2024-12-13T14:15:54.021287455Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:54.025416 env[1837]: time="2024-12-13T14:15:54.025335823Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:54.027206 env[1837]: time="2024-12-13T14:15:54.027139699Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Dec 13 14:15:54.031673 env[1837]: 
time="2024-12-13T14:15:54.031168124Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 14:15:54.036109 env[1837]: time="2024-12-13T14:15:54.036039990Z" level=info msg="CreateContainer within sandbox \"d008068efc2aabdb93344561c6e26e4d710313e8c6431f38987590fa15d9ea6e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 14:15:54.083521 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount779500469.mount: Deactivated successfully. Dec 13 14:15:54.088209 env[1837]: time="2024-12-13T14:15:54.088032156Z" level=info msg="CreateContainer within sandbox \"d008068efc2aabdb93344561c6e26e4d710313e8c6431f38987590fa15d9ea6e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8398758e609ce670e8f81f4a3983bec66e3d4cb616658d1677a03a111fe687ab\"" Dec 13 14:15:54.091129 env[1837]: time="2024-12-13T14:15:54.090947442Z" level=info msg="StartContainer for \"8398758e609ce670e8f81f4a3983bec66e3d4cb616658d1677a03a111fe687ab\"" Dec 13 14:15:54.149869 systemd[1]: run-containerd-runc-k8s.io-8398758e609ce670e8f81f4a3983bec66e3d4cb616658d1677a03a111fe687ab-runc.Lq4dFy.mount: Deactivated successfully. 
Dec 13 14:15:54.247551 env[1837]: time="2024-12-13T14:15:54.247485373Z" level=info msg="StartContainer for \"8398758e609ce670e8f81f4a3983bec66e3d4cb616658d1677a03a111fe687ab\" returns successfully" Dec 13 14:15:54.367000 audit[5162]: NETFILTER_CFG table=filter:115 family=2 entries=10 op=nft_register_rule pid=5162 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:15:54.367000 audit[5162]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=fffff1ac3360 a2=0 a3=1 items=0 ppid=3178 pid=5162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:54.367000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:15:54.375000 audit[5162]: NETFILTER_CFG table=nat:116 family=2 entries=20 op=nft_register_rule pid=5162 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:15:54.375000 audit[5162]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffff1ac3360 a2=0 a3=1 items=0 ppid=3178 pid=5162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:54.375000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:15:54.439230 env[1837]: time="2024-12-13T14:15:54.439177758Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:54.448140 env[1837]: time="2024-12-13T14:15:54.448085908Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:54.458644 env[1837]: time="2024-12-13T14:15:54.458560326Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:54.469599 env[1837]: time="2024-12-13T14:15:54.469507743Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:54.475067 env[1837]: time="2024-12-13T14:15:54.473817443Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Dec 13 14:15:54.480168 env[1837]: time="2024-12-13T14:15:54.480115172Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 14:15:54.484303 env[1837]: time="2024-12-13T14:15:54.484246839Z" level=info msg="CreateContainer within sandbox \"0edf30dcc14dd38363fba7ac237e9ac9a5fcf692c167de165d5eb5124079b17d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 14:15:54.606038 env[1837]: time="2024-12-13T14:15:54.605975822Z" level=info msg="CreateContainer within sandbox \"0edf30dcc14dd38363fba7ac237e9ac9a5fcf692c167de165d5eb5124079b17d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0803a828c7adec88dc1bd772516d5de099aed05646fcddb52bcc74424928f73f\"" Dec 13 14:15:54.610051 env[1837]: time="2024-12-13T14:15:54.609742555Z" level=info msg="StartContainer for \"0803a828c7adec88dc1bd772516d5de099aed05646fcddb52bcc74424928f73f\"" Dec 13 14:15:54.780948 env[1837]: time="2024-12-13T14:15:54.780806642Z" level=info msg="StartContainer for 
\"0803a828c7adec88dc1bd772516d5de099aed05646fcddb52bcc74424928f73f\" returns successfully" Dec 13 14:15:55.039470 systemd[1]: Started sshd@8-172.31.26.163:22-139.178.89.65:43164.service. Dec 13 14:15:55.055655 kernel: kauditd_printk_skb: 18 callbacks suppressed Dec 13 14:15:55.055802 kernel: audit: type=1130 audit(1734099355.039:426): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.26.163:22-139.178.89.65:43164 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:15:55.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.26.163:22-139.178.89.65:43164 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:15:55.265000 audit[5200]: USER_ACCT pid=5200 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:15:55.275495 sshd[5200]: Accepted publickey for core from 139.178.89.65 port 43164 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:15:55.278976 sshd[5200]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:15:55.277000 audit[5200]: CRED_ACQ pid=5200 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:15:55.295873 kernel: audit: type=1101 audit(1734099355.265:427): pid=5200 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh 
res=success' Dec 13 14:15:55.296013 kernel: audit: type=1103 audit(1734099355.277:428): pid=5200 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:15:55.289475 systemd-logind[1829]: New session 9 of user core. Dec 13 14:15:55.290706 systemd[1]: Started session-9.scope. Dec 13 14:15:55.319129 kubelet[2999]: I1213 14:15:55.318428 2999 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 14:15:55.327970 kernel: audit: type=1006 audit(1734099355.277:429): pid=5200 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Dec 13 14:15:55.277000 audit[5200]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffeb67f2f0 a2=3 a3=1 items=0 ppid=1 pid=5200 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:55.339513 kernel: audit: type=1300 audit(1734099355.277:429): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffeb67f2f0 a2=3 a3=1 items=0 ppid=1 pid=5200 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:55.339615 kernel: audit: type=1327 audit(1734099355.277:429): proctitle=737368643A20636F7265205B707269765D Dec 13 14:15:55.277000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:15:55.306000 audit[5200]: USER_START pid=5200 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 
14:15:55.354188 kernel: audit: type=1105 audit(1734099355.306:430): pid=5200 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:15:55.354930 kubelet[2999]: I1213 14:15:55.354872 2999 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-bffc5dd46-2qrs2" podStartSLOduration=30.95315116 podStartE2EDuration="36.354809758s" podCreationTimestamp="2024-12-13 14:15:19 +0000 UTC" firstStartedPulling="2024-12-13 14:15:48.627162394 +0000 UTC m=+51.131768150" lastFinishedPulling="2024-12-13 14:15:54.02882092 +0000 UTC m=+56.533426748" observedRunningTime="2024-12-13 14:15:54.327425737 +0000 UTC m=+56.832031517" watchObservedRunningTime="2024-12-13 14:15:55.354809758 +0000 UTC m=+57.859415514" Dec 13 14:15:55.314000 audit[5203]: CRED_ACQ pid=5203 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:15:55.365073 kernel: audit: type=1103 audit(1734099355.314:431): pid=5203 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:15:55.377000 audit[5205]: NETFILTER_CFG table=filter:117 family=2 entries=10 op=nft_register_rule pid=5205 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:15:55.377000 audit[5205]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=fffff80867a0 a2=0 a3=1 items=0 ppid=3178 pid=5205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:55.395116 kernel: audit: type=1325 audit(1734099355.377:432): table=filter:117 family=2 entries=10 op=nft_register_rule pid=5205 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:15:55.395236 kernel: audit: type=1300 audit(1734099355.377:432): arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=fffff80867a0 a2=0 a3=1 items=0 ppid=3178 pid=5205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:55.377000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:15:55.397000 audit[5205]: NETFILTER_CFG table=nat:118 family=2 entries=20 op=nft_register_rule pid=5205 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:15:55.397000 audit[5205]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffff80867a0 a2=0 a3=1 items=0 ppid=3178 pid=5205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:15:55.397000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:15:55.661356 sshd[5200]: pam_unix(sshd:session): session closed for user core Dec 13 14:15:55.662000 audit[5200]: USER_END pid=5200 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:15:55.662000 audit[5200]: CRED_DISP pid=5200 
uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:15:55.666387 systemd-logind[1829]: Session 9 logged out. Waiting for processes to exit. Dec 13 14:15:55.668194 systemd[1]: sshd@8-172.31.26.163:22-139.178.89.65:43164.service: Deactivated successfully. Dec 13 14:15:55.669612 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 14:15:55.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.26.163:22-139.178.89.65:43164 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:15:55.672023 systemd-logind[1829]: Removed session 9. Dec 13 14:15:56.263741 env[1837]: time="2024-12-13T14:15:56.263670734Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:56.267228 env[1837]: time="2024-12-13T14:15:56.267162822Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:56.270323 env[1837]: time="2024-12-13T14:15:56.270271134Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:56.273298 env[1837]: time="2024-12-13T14:15:56.273237553Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:56.274451 env[1837]: time="2024-12-13T14:15:56.274396263Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Dec 13 14:15:56.279121 env[1837]: time="2024-12-13T14:15:56.278967710Z" level=info msg="CreateContainer within sandbox \"c7b67fed57c2bb29884006a91a007f4caea0f8f29b42dfe7366bad18b45410f4\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 14:15:56.316068 env[1837]: time="2024-12-13T14:15:56.315993339Z" level=info msg="CreateContainer within sandbox \"c7b67fed57c2bb29884006a91a007f4caea0f8f29b42dfe7366bad18b45410f4\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"3af22d123d5ef6d55275565f560cf15e319381fd19503cbf40c72afd845eb84f\"" Dec 13 14:15:56.317008 env[1837]: time="2024-12-13T14:15:56.316949283Z" level=info msg="StartContainer for \"3af22d123d5ef6d55275565f560cf15e319381fd19503cbf40c72afd845eb84f\"" Dec 13 14:15:56.324850 kubelet[2999]: I1213 14:15:56.324480 2999 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 14:15:56.421565 systemd[1]: run-containerd-runc-k8s.io-3af22d123d5ef6d55275565f560cf15e319381fd19503cbf40c72afd845eb84f-runc.m7q8qq.mount: Deactivated successfully. Dec 13 14:15:56.585839 env[1837]: time="2024-12-13T14:15:56.585660911Z" level=info msg="StartContainer for \"3af22d123d5ef6d55275565f560cf15e319381fd19503cbf40c72afd845eb84f\" returns successfully" Dec 13 14:15:56.588324 env[1837]: time="2024-12-13T14:15:56.588210495Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 14:15:57.763487 env[1837]: time="2024-12-13T14:15:57.763432675Z" level=info msg="StopPodSandbox for \"d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14\"" Dec 13 14:15:58.192064 env[1837]: 2024-12-13 14:15:58.010 [WARNING][5271] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--163-k8s-csi--node--driver--79sgh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"83621a84-eb8a-4acb-be6b-37240d10ca28", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 15, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-163", ContainerID:"c7b67fed57c2bb29884006a91a007f4caea0f8f29b42dfe7366bad18b45410f4", Pod:"csi-node-driver-79sgh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.120.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib6b943e0f60", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:15:58.192064 env[1837]: 2024-12-13 14:15:58.011 [INFO][5271] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14" Dec 13 14:15:58.192064 env[1837]: 2024-12-13 14:15:58.011 [INFO][5271] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14" iface="eth0" netns="" Dec 13 14:15:58.192064 env[1837]: 2024-12-13 14:15:58.011 [INFO][5271] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14" Dec 13 14:15:58.192064 env[1837]: 2024-12-13 14:15:58.011 [INFO][5271] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14" Dec 13 14:15:58.192064 env[1837]: 2024-12-13 14:15:58.158 [INFO][5279] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14" HandleID="k8s-pod-network.d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14" Workload="ip--172--31--26--163-k8s-csi--node--driver--79sgh-eth0" Dec 13 14:15:58.192064 env[1837]: 2024-12-13 14:15:58.159 [INFO][5279] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:15:58.192064 env[1837]: 2024-12-13 14:15:58.159 [INFO][5279] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:15:58.192064 env[1837]: 2024-12-13 14:15:58.171 [WARNING][5279] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14" HandleID="k8s-pod-network.d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14" Workload="ip--172--31--26--163-k8s-csi--node--driver--79sgh-eth0" Dec 13 14:15:58.192064 env[1837]: 2024-12-13 14:15:58.171 [INFO][5279] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14" HandleID="k8s-pod-network.d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14" Workload="ip--172--31--26--163-k8s-csi--node--driver--79sgh-eth0" Dec 13 14:15:58.192064 env[1837]: 2024-12-13 14:15:58.174 [INFO][5279] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:15:58.192064 env[1837]: 2024-12-13 14:15:58.180 [INFO][5271] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14" Dec 13 14:15:58.192064 env[1837]: time="2024-12-13T14:15:58.184178021Z" level=info msg="TearDown network for sandbox \"d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14\" successfully" Dec 13 14:15:58.192064 env[1837]: time="2024-12-13T14:15:58.184324511Z" level=info msg="StopPodSandbox for \"d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14\" returns successfully" Dec 13 14:15:58.204673 env[1837]: time="2024-12-13T14:15:58.195804170Z" level=info msg="RemovePodSandbox for \"d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14\"" Dec 13 14:15:58.204673 env[1837]: time="2024-12-13T14:15:58.195878459Z" level=info msg="Forcibly stopping sandbox \"d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14\"" Dec 13 14:15:58.421695 env[1837]: 2024-12-13 14:15:58.350 [WARNING][5300] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--163-k8s-csi--node--driver--79sgh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"83621a84-eb8a-4acb-be6b-37240d10ca28", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 15, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-163", ContainerID:"c7b67fed57c2bb29884006a91a007f4caea0f8f29b42dfe7366bad18b45410f4", Pod:"csi-node-driver-79sgh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.120.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib6b943e0f60", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:15:58.421695 env[1837]: 2024-12-13 14:15:58.351 [INFO][5300] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14" Dec 13 14:15:58.421695 env[1837]: 2024-12-13 14:15:58.351 [INFO][5300] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14" iface="eth0" netns="" Dec 13 14:15:58.421695 env[1837]: 2024-12-13 14:15:58.351 [INFO][5300] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14" Dec 13 14:15:58.421695 env[1837]: 2024-12-13 14:15:58.351 [INFO][5300] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14" Dec 13 14:15:58.421695 env[1837]: 2024-12-13 14:15:58.398 [INFO][5306] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14" HandleID="k8s-pod-network.d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14" Workload="ip--172--31--26--163-k8s-csi--node--driver--79sgh-eth0" Dec 13 14:15:58.421695 env[1837]: 2024-12-13 14:15:58.398 [INFO][5306] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:15:58.421695 env[1837]: 2024-12-13 14:15:58.399 [INFO][5306] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:15:58.421695 env[1837]: 2024-12-13 14:15:58.412 [WARNING][5306] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14" HandleID="k8s-pod-network.d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14" Workload="ip--172--31--26--163-k8s-csi--node--driver--79sgh-eth0" Dec 13 14:15:58.421695 env[1837]: 2024-12-13 14:15:58.412 [INFO][5306] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14" HandleID="k8s-pod-network.d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14" Workload="ip--172--31--26--163-k8s-csi--node--driver--79sgh-eth0" Dec 13 14:15:58.421695 env[1837]: 2024-12-13 14:15:58.416 [INFO][5306] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:15:58.421695 env[1837]: 2024-12-13 14:15:58.419 [INFO][5300] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14" Dec 13 14:15:58.422827 env[1837]: time="2024-12-13T14:15:58.421722254Z" level=info msg="TearDown network for sandbox \"d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14\" successfully" Dec 13 14:15:58.427733 env[1837]: time="2024-12-13T14:15:58.427663831Z" level=info msg="RemovePodSandbox \"d58968af030e8a49700d14cfc96971877872cc840ed0aa1ea4e66b6064492d14\" returns successfully" Dec 13 14:15:58.428598 env[1837]: time="2024-12-13T14:15:58.428542194Z" level=info msg="StopPodSandbox for \"b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb\"" Dec 13 14:15:58.456918 env[1837]: time="2024-12-13T14:15:58.456767230Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:58.461231 env[1837]: time="2024-12-13T14:15:58.461172199Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:58.464950 env[1837]: time="2024-12-13T14:15:58.464894276Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:58.469060 env[1837]: time="2024-12-13T14:15:58.468989850Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:58.470452 env[1837]: time="2024-12-13T14:15:58.470401615Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Dec 13 14:15:58.476259 env[1837]: time="2024-12-13T14:15:58.476147181Z" level=info msg="CreateContainer within sandbox \"c7b67fed57c2bb29884006a91a007f4caea0f8f29b42dfe7366bad18b45410f4\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 14:15:58.517326 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2263157676.mount: Deactivated successfully. 
Dec 13 14:15:58.530110 env[1837]: time="2024-12-13T14:15:58.527283045Z" level=info msg="CreateContainer within sandbox \"c7b67fed57c2bb29884006a91a007f4caea0f8f29b42dfe7366bad18b45410f4\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"57752c30c24be3342ae2ff405d868df56c1109089fa3fc7238966d4122141de5\"" Dec 13 14:15:58.530747 env[1837]: time="2024-12-13T14:15:58.530688011Z" level=info msg="StartContainer for \"57752c30c24be3342ae2ff405d868df56c1109089fa3fc7238966d4122141de5\"" Dec 13 14:15:58.701108 env[1837]: 2024-12-13 14:15:58.556 [WARNING][5327] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--163-k8s-calico--apiserver--bffc5dd46--2qrs2-eth0", GenerateName:"calico-apiserver-bffc5dd46-", Namespace:"calico-apiserver", SelfLink:"", UID:"ef634f66-b7a3-4b1f-99b3-8db2e225f26a", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 15, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bffc5dd46", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-163", ContainerID:"d008068efc2aabdb93344561c6e26e4d710313e8c6431f38987590fa15d9ea6e", Pod:"calico-apiserver-bffc5dd46-2qrs2", Endpoint:"eth0", 
ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.120.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali34aa87130ca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:15:58.701108 env[1837]: 2024-12-13 14:15:58.559 [INFO][5327] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb" Dec 13 14:15:58.701108 env[1837]: 2024-12-13 14:15:58.559 [INFO][5327] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb" iface="eth0" netns="" Dec 13 14:15:58.701108 env[1837]: 2024-12-13 14:15:58.559 [INFO][5327] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb" Dec 13 14:15:58.701108 env[1837]: 2024-12-13 14:15:58.559 [INFO][5327] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb" Dec 13 14:15:58.701108 env[1837]: 2024-12-13 14:15:58.671 [INFO][5340] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb" HandleID="k8s-pod-network.b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb" Workload="ip--172--31--26--163-k8s-calico--apiserver--bffc5dd46--2qrs2-eth0" Dec 13 14:15:58.701108 env[1837]: 2024-12-13 14:15:58.672 [INFO][5340] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:15:58.701108 env[1837]: 2024-12-13 14:15:58.672 [INFO][5340] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:15:58.701108 env[1837]: 2024-12-13 14:15:58.693 [WARNING][5340] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb" HandleID="k8s-pod-network.b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb" Workload="ip--172--31--26--163-k8s-calico--apiserver--bffc5dd46--2qrs2-eth0" Dec 13 14:15:58.701108 env[1837]: 2024-12-13 14:15:58.693 [INFO][5340] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb" HandleID="k8s-pod-network.b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb" Workload="ip--172--31--26--163-k8s-calico--apiserver--bffc5dd46--2qrs2-eth0" Dec 13 14:15:58.701108 env[1837]: 2024-12-13 14:15:58.696 [INFO][5340] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:15:58.701108 env[1837]: 2024-12-13 14:15:58.698 [INFO][5327] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb" Dec 13 14:15:58.702183 env[1837]: time="2024-12-13T14:15:58.701095608Z" level=info msg="TearDown network for sandbox \"b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb\" successfully" Dec 13 14:15:58.702183 env[1837]: time="2024-12-13T14:15:58.701146666Z" level=info msg="StopPodSandbox for \"b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb\" returns successfully" Dec 13 14:15:58.703470 env[1837]: time="2024-12-13T14:15:58.703420296Z" level=info msg="RemovePodSandbox for \"b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb\"" Dec 13 14:15:58.703771 env[1837]: time="2024-12-13T14:15:58.703706940Z" level=info msg="Forcibly stopping sandbox \"b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb\"" Dec 13 14:15:58.793656 env[1837]: time="2024-12-13T14:15:58.793465839Z" level=info msg="StartContainer for \"57752c30c24be3342ae2ff405d868df56c1109089fa3fc7238966d4122141de5\" returns successfully" Dec 13 14:15:58.933431 env[1837]: 2024-12-13 14:15:58.865 [WARNING][5375] 
cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--163-k8s-calico--apiserver--bffc5dd46--2qrs2-eth0", GenerateName:"calico-apiserver-bffc5dd46-", Namespace:"calico-apiserver", SelfLink:"", UID:"ef634f66-b7a3-4b1f-99b3-8db2e225f26a", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 15, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bffc5dd46", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-163", ContainerID:"d008068efc2aabdb93344561c6e26e4d710313e8c6431f38987590fa15d9ea6e", Pod:"calico-apiserver-bffc5dd46-2qrs2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.120.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali34aa87130ca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:15:58.933431 env[1837]: 2024-12-13 14:15:58.865 [INFO][5375] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb" Dec 13 14:15:58.933431 env[1837]: 2024-12-13 14:15:58.865 [INFO][5375] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb" iface="eth0" netns="" Dec 13 14:15:58.933431 env[1837]: 2024-12-13 14:15:58.865 [INFO][5375] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb" Dec 13 14:15:58.933431 env[1837]: 2024-12-13 14:15:58.865 [INFO][5375] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb" Dec 13 14:15:58.933431 env[1837]: 2024-12-13 14:15:58.910 [INFO][5392] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb" HandleID="k8s-pod-network.b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb" Workload="ip--172--31--26--163-k8s-calico--apiserver--bffc5dd46--2qrs2-eth0" Dec 13 14:15:58.933431 env[1837]: 2024-12-13 14:15:58.910 [INFO][5392] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:15:58.933431 env[1837]: 2024-12-13 14:15:58.910 [INFO][5392] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:15:58.933431 env[1837]: 2024-12-13 14:15:58.925 [WARNING][5392] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb" HandleID="k8s-pod-network.b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb" Workload="ip--172--31--26--163-k8s-calico--apiserver--bffc5dd46--2qrs2-eth0" Dec 13 14:15:58.933431 env[1837]: 2024-12-13 14:15:58.925 [INFO][5392] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb" HandleID="k8s-pod-network.b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb" Workload="ip--172--31--26--163-k8s-calico--apiserver--bffc5dd46--2qrs2-eth0" Dec 13 14:15:58.933431 env[1837]: 2024-12-13 14:15:58.928 [INFO][5392] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:15:58.933431 env[1837]: 2024-12-13 14:15:58.931 [INFO][5375] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb" Dec 13 14:15:58.934604 env[1837]: time="2024-12-13T14:15:58.934553206Z" level=info msg="TearDown network for sandbox \"b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb\" successfully" Dec 13 14:15:58.940667 env[1837]: time="2024-12-13T14:15:58.940528946Z" level=info msg="RemovePodSandbox \"b9c742d4a2f8ad78d28c9d89bd259efce095c9c7e7f3fddcf02c853e57d343bb\" returns successfully" Dec 13 14:15:58.941937 env[1837]: time="2024-12-13T14:15:58.941893001Z" level=info msg="StopPodSandbox for \"9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9\"" Dec 13 14:15:59.083375 env[1837]: 2024-12-13 14:15:59.016 [WARNING][5412] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--163-k8s-coredns--76f75df574--nxsxq-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"d93f59e9-cea4-4e42-99d4-3d89f412196e", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 15, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-163", ContainerID:"02ab28f184cafe905f744593b6916ddccf94b8353c30947472acb92097c7bbc6", Pod:"coredns-76f75df574-nxsxq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali46c5baa8baa", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:15:59.083375 env[1837]: 2024-12-13 14:15:59.017 [INFO][5412] cni-plugin/k8s.go 608: 
Cleaning up netns ContainerID="9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9" Dec 13 14:15:59.083375 env[1837]: 2024-12-13 14:15:59.017 [INFO][5412] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9" iface="eth0" netns="" Dec 13 14:15:59.083375 env[1837]: 2024-12-13 14:15:59.017 [INFO][5412] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9" Dec 13 14:15:59.083375 env[1837]: 2024-12-13 14:15:59.017 [INFO][5412] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9" Dec 13 14:15:59.083375 env[1837]: 2024-12-13 14:15:59.060 [INFO][5418] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9" HandleID="k8s-pod-network.9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9" Workload="ip--172--31--26--163-k8s-coredns--76f75df574--nxsxq-eth0" Dec 13 14:15:59.083375 env[1837]: 2024-12-13 14:15:59.060 [INFO][5418] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:15:59.083375 env[1837]: 2024-12-13 14:15:59.060 [INFO][5418] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:15:59.083375 env[1837]: 2024-12-13 14:15:59.074 [WARNING][5418] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9" HandleID="k8s-pod-network.9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9" Workload="ip--172--31--26--163-k8s-coredns--76f75df574--nxsxq-eth0" Dec 13 14:15:59.083375 env[1837]: 2024-12-13 14:15:59.074 [INFO][5418] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9" HandleID="k8s-pod-network.9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9" Workload="ip--172--31--26--163-k8s-coredns--76f75df574--nxsxq-eth0" Dec 13 14:15:59.083375 env[1837]: 2024-12-13 14:15:59.077 [INFO][5418] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:15:59.083375 env[1837]: 2024-12-13 14:15:59.080 [INFO][5412] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9" Dec 13 14:15:59.085023 env[1837]: time="2024-12-13T14:15:59.084748730Z" level=info msg="TearDown network for sandbox \"9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9\" successfully" Dec 13 14:15:59.085023 env[1837]: time="2024-12-13T14:15:59.084799873Z" level=info msg="StopPodSandbox for \"9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9\" returns successfully" Dec 13 14:15:59.085864 env[1837]: time="2024-12-13T14:15:59.085824706Z" level=info msg="RemovePodSandbox for \"9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9\"" Dec 13 14:15:59.086326 env[1837]: time="2024-12-13T14:15:59.086250606Z" level=info msg="Forcibly stopping sandbox \"9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9\"" Dec 13 14:15:59.096694 kubelet[2999]: I1213 14:15:59.096579 2999 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 14:15:59.096694 kubelet[2999]: I1213 
14:15:59.096699 2999 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 14:15:59.258704 env[1837]: 2024-12-13 14:15:59.185 [WARNING][5437] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--163-k8s-coredns--76f75df574--nxsxq-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"d93f59e9-cea4-4e42-99d4-3d89f412196e", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 15, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-163", ContainerID:"02ab28f184cafe905f744593b6916ddccf94b8353c30947472acb92097c7bbc6", Pod:"coredns-76f75df574-nxsxq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali46c5baa8baa", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, 
NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:15:59.258704 env[1837]: 2024-12-13 14:15:59.185 [INFO][5437] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9" Dec 13 14:15:59.258704 env[1837]: 2024-12-13 14:15:59.185 [INFO][5437] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9" iface="eth0" netns="" Dec 13 14:15:59.258704 env[1837]: 2024-12-13 14:15:59.185 [INFO][5437] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9" Dec 13 14:15:59.258704 env[1837]: 2024-12-13 14:15:59.186 [INFO][5437] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9" Dec 13 14:15:59.258704 env[1837]: 2024-12-13 14:15:59.227 [INFO][5443] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9" HandleID="k8s-pod-network.9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9" Workload="ip--172--31--26--163-k8s-coredns--76f75df574--nxsxq-eth0" Dec 13 14:15:59.258704 env[1837]: 2024-12-13 14:15:59.227 [INFO][5443] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:15:59.258704 env[1837]: 2024-12-13 14:15:59.227 [INFO][5443] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:15:59.258704 env[1837]: 2024-12-13 14:15:59.240 [WARNING][5443] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9" HandleID="k8s-pod-network.9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9" Workload="ip--172--31--26--163-k8s-coredns--76f75df574--nxsxq-eth0" Dec 13 14:15:59.258704 env[1837]: 2024-12-13 14:15:59.242 [INFO][5443] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9" HandleID="k8s-pod-network.9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9" Workload="ip--172--31--26--163-k8s-coredns--76f75df574--nxsxq-eth0" Dec 13 14:15:59.258704 env[1837]: 2024-12-13 14:15:59.246 [INFO][5443] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:15:59.258704 env[1837]: 2024-12-13 14:15:59.254 [INFO][5437] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9" Dec 13 14:15:59.259866 env[1837]: time="2024-12-13T14:15:59.259802346Z" level=info msg="TearDown network for sandbox \"9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9\" successfully" Dec 13 14:15:59.265212 env[1837]: time="2024-12-13T14:15:59.265156906Z" level=info msg="RemovePodSandbox \"9ac037ace7775c2b707469cfe8f20a9d51cd74458cf9d1d31d568034ddf9ecd9\" returns successfully" Dec 13 14:15:59.266371 env[1837]: time="2024-12-13T14:15:59.266283100Z" level=info msg="StopPodSandbox for \"92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673\"" Dec 13 14:15:59.372673 kubelet[2999]: I1213 14:15:59.370756 2999 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-bffc5dd46-5tznx" podStartSLOduration=36.373986423 podStartE2EDuration="40.370693318s" podCreationTimestamp="2024-12-13 14:15:19 +0000 UTC" firstStartedPulling="2024-12-13 14:15:50.47916275 +0000 UTC m=+52.983768506" lastFinishedPulling="2024-12-13 14:15:54.475869657 +0000 UTC m=+56.980475401" 
observedRunningTime="2024-12-13 14:15:55.355566916 +0000 UTC m=+57.860172696" watchObservedRunningTime="2024-12-13 14:15:59.370693318 +0000 UTC m=+61.875299110" Dec 13 14:15:59.437182 env[1837]: 2024-12-13 14:15:59.330 [WARNING][5462] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--163-k8s-coredns--76f75df574--rbxmj-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"d999231e-24a7-47cf-8eea-96857833ff01", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 15, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-163", ContainerID:"c861dc50f697aa85f674702de189a25175926905b8e321c47b57b61bdae6d427", Pod:"coredns-76f75df574-rbxmj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali26f07bd83c5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:15:59.437182 env[1837]: 2024-12-13 14:15:59.331 [INFO][5462] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673" Dec 13 14:15:59.437182 env[1837]: 2024-12-13 14:15:59.331 [INFO][5462] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673" iface="eth0" netns="" Dec 13 14:15:59.437182 env[1837]: 2024-12-13 14:15:59.331 [INFO][5462] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673" Dec 13 14:15:59.437182 env[1837]: 2024-12-13 14:15:59.331 [INFO][5462] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673" Dec 13 14:15:59.437182 env[1837]: 2024-12-13 14:15:59.415 [INFO][5468] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673" HandleID="k8s-pod-network.92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673" Workload="ip--172--31--26--163-k8s-coredns--76f75df574--rbxmj-eth0" Dec 13 14:15:59.437182 env[1837]: 2024-12-13 14:15:59.415 [INFO][5468] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:15:59.437182 env[1837]: 2024-12-13 14:15:59.415 [INFO][5468] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:15:59.437182 env[1837]: 2024-12-13 14:15:59.428 [WARNING][5468] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673" HandleID="k8s-pod-network.92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673" Workload="ip--172--31--26--163-k8s-coredns--76f75df574--rbxmj-eth0" Dec 13 14:15:59.437182 env[1837]: 2024-12-13 14:15:59.428 [INFO][5468] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673" HandleID="k8s-pod-network.92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673" Workload="ip--172--31--26--163-k8s-coredns--76f75df574--rbxmj-eth0" Dec 13 14:15:59.437182 env[1837]: 2024-12-13 14:15:59.431 [INFO][5468] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:15:59.437182 env[1837]: 2024-12-13 14:15:59.433 [INFO][5462] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673" Dec 13 14:15:59.438297 env[1837]: time="2024-12-13T14:15:59.437231897Z" level=info msg="TearDown network for sandbox \"92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673\" successfully" Dec 13 14:15:59.438297 env[1837]: time="2024-12-13T14:15:59.437283867Z" level=info msg="StopPodSandbox for \"92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673\" returns successfully" Dec 13 14:15:59.439313 env[1837]: time="2024-12-13T14:15:59.439270284Z" level=info msg="RemovePodSandbox for \"92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673\"" Dec 13 14:15:59.439539 env[1837]: time="2024-12-13T14:15:59.439480288Z" level=info msg="Forcibly stopping sandbox \"92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673\"" Dec 13 14:15:59.583404 env[1837]: 2024-12-13 14:15:59.512 [WARNING][5486] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--163-k8s-coredns--76f75df574--rbxmj-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"d999231e-24a7-47cf-8eea-96857833ff01", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 15, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-163", ContainerID:"c861dc50f697aa85f674702de189a25175926905b8e321c47b57b61bdae6d427", Pod:"coredns-76f75df574-rbxmj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali26f07bd83c5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:15:59.583404 env[1837]: 2024-12-13 14:15:59.513 [INFO][5486] cni-plugin/k8s.go 608: 
Cleaning up netns ContainerID="92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673" Dec 13 14:15:59.583404 env[1837]: 2024-12-13 14:15:59.513 [INFO][5486] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673" iface="eth0" netns="" Dec 13 14:15:59.583404 env[1837]: 2024-12-13 14:15:59.513 [INFO][5486] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673" Dec 13 14:15:59.583404 env[1837]: 2024-12-13 14:15:59.513 [INFO][5486] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673" Dec 13 14:15:59.583404 env[1837]: 2024-12-13 14:15:59.555 [INFO][5492] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673" HandleID="k8s-pod-network.92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673" Workload="ip--172--31--26--163-k8s-coredns--76f75df574--rbxmj-eth0" Dec 13 14:15:59.583404 env[1837]: 2024-12-13 14:15:59.555 [INFO][5492] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:15:59.583404 env[1837]: 2024-12-13 14:15:59.555 [INFO][5492] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:15:59.583404 env[1837]: 2024-12-13 14:15:59.576 [WARNING][5492] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673" HandleID="k8s-pod-network.92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673" Workload="ip--172--31--26--163-k8s-coredns--76f75df574--rbxmj-eth0" Dec 13 14:15:59.583404 env[1837]: 2024-12-13 14:15:59.576 [INFO][5492] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673" HandleID="k8s-pod-network.92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673" Workload="ip--172--31--26--163-k8s-coredns--76f75df574--rbxmj-eth0" Dec 13 14:15:59.583404 env[1837]: 2024-12-13 14:15:59.578 [INFO][5492] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:15:59.583404 env[1837]: 2024-12-13 14:15:59.581 [INFO][5486] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673" Dec 13 14:15:59.584520 env[1837]: time="2024-12-13T14:15:59.584469869Z" level=info msg="TearDown network for sandbox \"92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673\" successfully" Dec 13 14:15:59.591591 env[1837]: time="2024-12-13T14:15:59.590825383Z" level=info msg="RemovePodSandbox \"92c7a34910fa1248625c6c50c754a106392bfd626a5be645a882d3cd6837b673\" returns successfully" Dec 13 14:15:59.592325 env[1837]: time="2024-12-13T14:15:59.592282464Z" level=info msg="StopPodSandbox for \"02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b\"" Dec 13 14:15:59.723387 env[1837]: 2024-12-13 14:15:59.658 [WARNING][5510] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--163-k8s-calico--kube--controllers--54654bf745--9fq4l-eth0", GenerateName:"calico-kube-controllers-54654bf745-", Namespace:"calico-system", SelfLink:"", UID:"649c8cd1-1016-49a5-85ac-f55023619db6", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 15, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54654bf745", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-163", ContainerID:"7006def5caab23fe965eec24bcf24461b7389d5873b10e57340ca7fe51396050", Pod:"calico-kube-controllers-54654bf745-9fq4l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.120.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali008f78e0953", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:15:59.723387 env[1837]: 2024-12-13 14:15:59.658 [INFO][5510] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b" Dec 13 14:15:59.723387 env[1837]: 2024-12-13 14:15:59.658 [INFO][5510] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b" iface="eth0" netns="" Dec 13 14:15:59.723387 env[1837]: 2024-12-13 14:15:59.658 [INFO][5510] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b" Dec 13 14:15:59.723387 env[1837]: 2024-12-13 14:15:59.658 [INFO][5510] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b" Dec 13 14:15:59.723387 env[1837]: 2024-12-13 14:15:59.699 [INFO][5516] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b" HandleID="k8s-pod-network.02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b" Workload="ip--172--31--26--163-k8s-calico--kube--controllers--54654bf745--9fq4l-eth0" Dec 13 14:15:59.723387 env[1837]: 2024-12-13 14:15:59.700 [INFO][5516] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:15:59.723387 env[1837]: 2024-12-13 14:15:59.701 [INFO][5516] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:15:59.723387 env[1837]: 2024-12-13 14:15:59.715 [WARNING][5516] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b" HandleID="k8s-pod-network.02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b" Workload="ip--172--31--26--163-k8s-calico--kube--controllers--54654bf745--9fq4l-eth0" Dec 13 14:15:59.723387 env[1837]: 2024-12-13 14:15:59.715 [INFO][5516] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b" HandleID="k8s-pod-network.02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b" Workload="ip--172--31--26--163-k8s-calico--kube--controllers--54654bf745--9fq4l-eth0" Dec 13 14:15:59.723387 env[1837]: 2024-12-13 14:15:59.717 [INFO][5516] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:15:59.723387 env[1837]: 2024-12-13 14:15:59.720 [INFO][5510] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b" Dec 13 14:15:59.725574 env[1837]: time="2024-12-13T14:15:59.725523416Z" level=info msg="TearDown network for sandbox \"02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b\" successfully" Dec 13 14:15:59.725768 env[1837]: time="2024-12-13T14:15:59.725733156Z" level=info msg="StopPodSandbox for \"02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b\" returns successfully" Dec 13 14:15:59.727135 env[1837]: time="2024-12-13T14:15:59.727053035Z" level=info msg="RemovePodSandbox for \"02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b\"" Dec 13 14:15:59.727502 env[1837]: time="2024-12-13T14:15:59.727423809Z" level=info msg="Forcibly stopping sandbox \"02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b\"" Dec 13 14:15:59.871762 env[1837]: 2024-12-13 14:15:59.807 [WARNING][5536] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--163-k8s-calico--kube--controllers--54654bf745--9fq4l-eth0", GenerateName:"calico-kube-controllers-54654bf745-", Namespace:"calico-system", SelfLink:"", UID:"649c8cd1-1016-49a5-85ac-f55023619db6", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 15, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54654bf745", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-163", ContainerID:"7006def5caab23fe965eec24bcf24461b7389d5873b10e57340ca7fe51396050", Pod:"calico-kube-controllers-54654bf745-9fq4l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.120.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali008f78e0953", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:15:59.871762 env[1837]: 2024-12-13 14:15:59.807 [INFO][5536] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b" Dec 13 14:15:59.871762 env[1837]: 2024-12-13 14:15:59.807 [INFO][5536] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b" iface="eth0" netns="" Dec 13 14:15:59.871762 env[1837]: 2024-12-13 14:15:59.807 [INFO][5536] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b" Dec 13 14:15:59.871762 env[1837]: 2024-12-13 14:15:59.808 [INFO][5536] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b" Dec 13 14:15:59.871762 env[1837]: 2024-12-13 14:15:59.849 [INFO][5542] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b" HandleID="k8s-pod-network.02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b" Workload="ip--172--31--26--163-k8s-calico--kube--controllers--54654bf745--9fq4l-eth0" Dec 13 14:15:59.871762 env[1837]: 2024-12-13 14:15:59.850 [INFO][5542] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:15:59.871762 env[1837]: 2024-12-13 14:15:59.850 [INFO][5542] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:15:59.871762 env[1837]: 2024-12-13 14:15:59.863 [WARNING][5542] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b" HandleID="k8s-pod-network.02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b" Workload="ip--172--31--26--163-k8s-calico--kube--controllers--54654bf745--9fq4l-eth0" Dec 13 14:15:59.871762 env[1837]: 2024-12-13 14:15:59.863 [INFO][5542] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b" HandleID="k8s-pod-network.02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b" Workload="ip--172--31--26--163-k8s-calico--kube--controllers--54654bf745--9fq4l-eth0" Dec 13 14:15:59.871762 env[1837]: 2024-12-13 14:15:59.866 [INFO][5542] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:15:59.871762 env[1837]: 2024-12-13 14:15:59.868 [INFO][5536] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b" Dec 13 14:15:59.873239 env[1837]: time="2024-12-13T14:15:59.873188908Z" level=info msg="TearDown network for sandbox \"02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b\" successfully" Dec 13 14:15:59.884713 env[1837]: time="2024-12-13T14:15:59.884609505Z" level=info msg="RemovePodSandbox \"02f00241ebf338cc34a262085ab3281e69a63a25d6837d85ee3743fa90dd978b\" returns successfully" Dec 13 14:15:59.885586 env[1837]: time="2024-12-13T14:15:59.885491076Z" level=info msg="StopPodSandbox for \"5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699\"" Dec 13 14:16:00.042203 env[1837]: 2024-12-13 14:15:59.968 [WARNING][5561] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--163-k8s-calico--apiserver--bffc5dd46--5tznx-eth0", GenerateName:"calico-apiserver-bffc5dd46-", Namespace:"calico-apiserver", SelfLink:"", UID:"29176771-bd16-429b-96c6-cf2e38be6836", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 15, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bffc5dd46", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-163", ContainerID:"0edf30dcc14dd38363fba7ac237e9ac9a5fcf692c167de165d5eb5124079b17d", Pod:"calico-apiserver-bffc5dd46-5tznx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.120.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidc4bcdcf13f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:16:00.042203 env[1837]: 2024-12-13 14:15:59.969 [INFO][5561] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699" Dec 13 14:16:00.042203 env[1837]: 2024-12-13 14:15:59.969 [INFO][5561] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699" iface="eth0" netns="" Dec 13 14:16:00.042203 env[1837]: 2024-12-13 14:15:59.969 [INFO][5561] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699" Dec 13 14:16:00.042203 env[1837]: 2024-12-13 14:15:59.970 [INFO][5561] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699" Dec 13 14:16:00.042203 env[1837]: 2024-12-13 14:16:00.020 [INFO][5567] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699" HandleID="k8s-pod-network.5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699" Workload="ip--172--31--26--163-k8s-calico--apiserver--bffc5dd46--5tznx-eth0" Dec 13 14:16:00.042203 env[1837]: 2024-12-13 14:16:00.020 [INFO][5567] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:16:00.042203 env[1837]: 2024-12-13 14:16:00.020 [INFO][5567] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:16:00.042203 env[1837]: 2024-12-13 14:16:00.033 [WARNING][5567] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699" HandleID="k8s-pod-network.5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699" Workload="ip--172--31--26--163-k8s-calico--apiserver--bffc5dd46--5tznx-eth0" Dec 13 14:16:00.042203 env[1837]: 2024-12-13 14:16:00.033 [INFO][5567] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699" HandleID="k8s-pod-network.5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699" Workload="ip--172--31--26--163-k8s-calico--apiserver--bffc5dd46--5tznx-eth0" Dec 13 14:16:00.042203 env[1837]: 2024-12-13 14:16:00.036 [INFO][5567] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:16:00.042203 env[1837]: 2024-12-13 14:16:00.038 [INFO][5561] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699" Dec 13 14:16:00.042203 env[1837]: time="2024-12-13T14:16:00.042116255Z" level=info msg="TearDown network for sandbox \"5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699\" successfully" Dec 13 14:16:00.044114 env[1837]: time="2024-12-13T14:16:00.043538135Z" level=info msg="StopPodSandbox for \"5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699\" returns successfully" Dec 13 14:16:00.044336 env[1837]: time="2024-12-13T14:16:00.044287066Z" level=info msg="RemovePodSandbox for \"5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699\"" Dec 13 14:16:00.044414 env[1837]: time="2024-12-13T14:16:00.044346632Z" level=info msg="Forcibly stopping sandbox \"5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699\"" Dec 13 14:16:00.180994 env[1837]: 2024-12-13 14:16:00.116 [WARNING][5586] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--163-k8s-calico--apiserver--bffc5dd46--5tznx-eth0", GenerateName:"calico-apiserver-bffc5dd46-", Namespace:"calico-apiserver", SelfLink:"", UID:"29176771-bd16-429b-96c6-cf2e38be6836", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 15, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bffc5dd46", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-163", ContainerID:"0edf30dcc14dd38363fba7ac237e9ac9a5fcf692c167de165d5eb5124079b17d", Pod:"calico-apiserver-bffc5dd46-5tznx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.120.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidc4bcdcf13f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:16:00.180994 env[1837]: 2024-12-13 14:16:00.116 [INFO][5586] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699" Dec 13 14:16:00.180994 env[1837]: 2024-12-13 14:16:00.116 [INFO][5586] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699" iface="eth0" netns="" Dec 13 14:16:00.180994 env[1837]: 2024-12-13 14:16:00.116 [INFO][5586] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699" Dec 13 14:16:00.180994 env[1837]: 2024-12-13 14:16:00.116 [INFO][5586] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699" Dec 13 14:16:00.180994 env[1837]: 2024-12-13 14:16:00.158 [INFO][5592] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699" HandleID="k8s-pod-network.5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699" Workload="ip--172--31--26--163-k8s-calico--apiserver--bffc5dd46--5tznx-eth0" Dec 13 14:16:00.180994 env[1837]: 2024-12-13 14:16:00.158 [INFO][5592] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:16:00.180994 env[1837]: 2024-12-13 14:16:00.159 [INFO][5592] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:16:00.180994 env[1837]: 2024-12-13 14:16:00.172 [WARNING][5592] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699" HandleID="k8s-pod-network.5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699" Workload="ip--172--31--26--163-k8s-calico--apiserver--bffc5dd46--5tznx-eth0" Dec 13 14:16:00.180994 env[1837]: 2024-12-13 14:16:00.172 [INFO][5592] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699" HandleID="k8s-pod-network.5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699" Workload="ip--172--31--26--163-k8s-calico--apiserver--bffc5dd46--5tznx-eth0" Dec 13 14:16:00.180994 env[1837]: 2024-12-13 14:16:00.175 [INFO][5592] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:16:00.180994 env[1837]: 2024-12-13 14:16:00.177 [INFO][5586] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699" Dec 13 14:16:00.182262 env[1837]: time="2024-12-13T14:16:00.181199696Z" level=info msg="TearDown network for sandbox \"5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699\" successfully" Dec 13 14:16:00.186578 env[1837]: time="2024-12-13T14:16:00.186489732Z" level=info msg="RemovePodSandbox \"5bbe9087dc1d6102a029feeaf0da6b2c57fcd23bbd65f6fc36aa1c5ae8bd8699\" returns successfully" Dec 13 14:16:00.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.26.163:22-139.178.89.65:49818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:16:00.687994 systemd[1]: Started sshd@9-172.31.26.163:22-139.178.89.65:49818.service. 
Dec 13 14:16:00.690327 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 13 14:16:00.690443 kernel: audit: type=1130 audit(1734099360.686:437): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.26.163:22-139.178.89.65:49818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:16:00.874000 audit[5599]: USER_ACCT pid=5599 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:00.880749 sshd[5599]: Accepted publickey for core from 139.178.89.65 port 49818 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:16:00.886355 sshd[5599]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:16:00.879000 audit[5599]: CRED_ACQ pid=5599 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:00.899330 kernel: audit: type=1101 audit(1734099360.874:438): pid=5599 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:00.899471 kernel: audit: type=1103 audit(1734099360.879:439): pid=5599 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:00.899526 kernel: audit: type=1006 audit(1734099360.879:440): pid=5599 uid=0 subj=system_u:system_r:kernel_t:s0 
old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Dec 13 14:16:00.898642 systemd[1]: Started session-10.scope. Dec 13 14:16:00.900928 systemd-logind[1829]: New session 10 of user core. Dec 13 14:16:00.879000 audit[5599]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc44930a0 a2=3 a3=1 items=0 ppid=1 pid=5599 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:16:00.913817 kernel: audit: type=1300 audit(1734099360.879:440): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc44930a0 a2=3 a3=1 items=0 ppid=1 pid=5599 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:16:00.879000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:16:00.919389 kernel: audit: type=1327 audit(1734099360.879:440): proctitle=737368643A20636F7265205B707269765D Dec 13 14:16:00.913000 audit[5599]: USER_START pid=5599 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:00.931724 kernel: audit: type=1105 audit(1734099360.913:441): pid=5599 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:00.933743 kernel: audit: type=1103 audit(1734099360.931:442): pid=5602 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" 
hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:00.931000 audit[5602]: CRED_ACQ pid=5602 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:01.193114 sshd[5599]: pam_unix(sshd:session): session closed for user core Dec 13 14:16:01.193000 audit[5599]: USER_END pid=5599 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:01.201374 systemd[1]: sshd@9-172.31.26.163:22-139.178.89.65:49818.service: Deactivated successfully. Dec 13 14:16:01.202799 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 14:16:01.197000 audit[5599]: CRED_DISP pid=5599 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:01.215665 kernel: audit: type=1106 audit(1734099361.193:443): pid=5599 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:01.215808 kernel: audit: type=1104 audit(1734099361.197:444): pid=5599 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:01.200000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.26.163:22-139.178.89.65:49818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:16:01.218521 systemd-logind[1829]: Session 10 logged out. Waiting for processes to exit. Dec 13 14:16:01.223093 systemd[1]: Started sshd@10-172.31.26.163:22-139.178.89.65:49820.service. Dec 13 14:16:01.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.26.163:22-139.178.89.65:49820 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:16:01.225696 systemd-logind[1829]: Removed session 10. Dec 13 14:16:01.392000 audit[5616]: USER_ACCT pid=5616 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:01.394971 sshd[5616]: Accepted publickey for core from 139.178.89.65 port 49820 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:16:01.395000 audit[5616]: CRED_ACQ pid=5616 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:01.395000 audit[5616]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffea023720 a2=3 a3=1 items=0 ppid=1 pid=5616 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:16:01.395000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:16:01.398242 sshd[5616]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:16:01.407589 systemd[1]: Started 
session-11.scope. Dec 13 14:16:01.409755 systemd-logind[1829]: New session 11 of user core. Dec 13 14:16:01.420000 audit[5616]: USER_START pid=5616 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:01.423000 audit[5619]: CRED_ACQ pid=5619 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:01.775861 sshd[5616]: pam_unix(sshd:session): session closed for user core Dec 13 14:16:01.777000 audit[5616]: USER_END pid=5616 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:01.777000 audit[5616]: CRED_DISP pid=5616 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:01.782578 systemd-logind[1829]: Session 11 logged out. Waiting for processes to exit. Dec 13 14:16:01.784261 systemd[1]: sshd@10-172.31.26.163:22-139.178.89.65:49820.service: Deactivated successfully. Dec 13 14:16:01.783000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.26.163:22-139.178.89.65:49820 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:16:01.786878 systemd[1]: session-11.scope: Deactivated successfully. 
Dec 13 14:16:01.789371 systemd-logind[1829]: Removed session 11. Dec 13 14:16:01.802557 systemd[1]: Started sshd@11-172.31.26.163:22-139.178.89.65:49822.service. Dec 13 14:16:01.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.26.163:22-139.178.89.65:49822 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:16:01.985000 audit[5626]: USER_ACCT pid=5626 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:01.987206 sshd[5626]: Accepted publickey for core from 139.178.89.65 port 49822 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:16:01.987000 audit[5626]: CRED_ACQ pid=5626 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:01.987000 audit[5626]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff79f0b80 a2=3 a3=1 items=0 ppid=1 pid=5626 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:16:01.987000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:16:01.989854 sshd[5626]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:16:01.999270 systemd-logind[1829]: New session 12 of user core. Dec 13 14:16:02.000509 systemd[1]: Started session-12.scope. 
Dec 13 14:16:02.015000 audit[5626]: USER_START pid=5626 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:02.018000 audit[5629]: CRED_ACQ pid=5629 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:02.259207 sshd[5626]: pam_unix(sshd:session): session closed for user core Dec 13 14:16:02.259000 audit[5626]: USER_END pid=5626 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:02.259000 audit[5626]: CRED_DISP pid=5626 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:02.265679 systemd-logind[1829]: Session 12 logged out. Waiting for processes to exit. Dec 13 14:16:02.268152 systemd[1]: sshd@11-172.31.26.163:22-139.178.89.65:49822.service: Deactivated successfully. Dec 13 14:16:02.267000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.26.163:22-139.178.89.65:49822 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:16:02.269684 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 14:16:02.273376 systemd-logind[1829]: Removed session 12. 
Dec 13 14:16:02.954028 systemd[1]: run-containerd-runc-k8s.io-8e9e3442e910e220d8a9f2d36a9de38d4e0db1aa66f9bc651ffdead7b3187ced-runc.Y4HKEx.mount: Deactivated successfully. Dec 13 14:16:07.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.26.163:22-139.178.89.65:49838 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:16:07.284361 systemd[1]: Started sshd@12-172.31.26.163:22-139.178.89.65:49838.service. Dec 13 14:16:07.286710 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 13 14:16:07.286803 kernel: audit: type=1130 audit(1734099367.283:464): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.26.163:22-139.178.89.65:49838 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:16:07.456000 audit[5664]: USER_ACCT pid=5664 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:07.467308 sshd[5664]: Accepted publickey for core from 139.178.89.65 port 49838 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:16:07.467845 kernel: audit: type=1101 audit(1734099367.456:465): pid=5664 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:07.468000 audit[5664]: CRED_ACQ pid=5664 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 
14:16:07.471148 sshd[5664]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:16:07.484064 kernel: audit: type=1103 audit(1734099367.468:466): pid=5664 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:07.484204 kernel: audit: type=1006 audit(1734099367.468:467): pid=5664 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Dec 13 14:16:07.468000 audit[5664]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd273c780 a2=3 a3=1 items=0 ppid=1 pid=5664 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:16:07.493796 kernel: audit: type=1300 audit(1734099367.468:467): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd273c780 a2=3 a3=1 items=0 ppid=1 pid=5664 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:16:07.468000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:16:07.499047 kernel: audit: type=1327 audit(1734099367.468:467): proctitle=737368643A20636F7265205B707269765D Dec 13 14:16:07.503871 systemd-logind[1829]: New session 13 of user core. Dec 13 14:16:07.506834 systemd[1]: Started session-13.scope. 
Dec 13 14:16:07.515000 audit[5664]: USER_START pid=5664 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:07.528700 kernel: audit: type=1105 audit(1734099367.515:468): pid=5664 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:07.527000 audit[5667]: CRED_ACQ pid=5667 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:07.538793 kernel: audit: type=1103 audit(1734099367.527:469): pid=5667 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:07.773282 sshd[5664]: pam_unix(sshd:session): session closed for user core Dec 13 14:16:07.773000 audit[5664]: USER_END pid=5664 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:07.775000 audit[5664]: CRED_DISP pid=5664 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 
14:16:07.786795 systemd[1]: sshd@12-172.31.26.163:22-139.178.89.65:49838.service: Deactivated successfully. Dec 13 14:16:07.794720 kernel: audit: type=1106 audit(1734099367.773:470): pid=5664 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:07.794905 kernel: audit: type=1104 audit(1734099367.775:471): pid=5664 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:07.785000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.26.163:22-139.178.89.65:49838 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:16:07.795975 systemd-logind[1829]: Session 13 logged out. Waiting for processes to exit. Dec 13 14:16:07.796136 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 14:16:07.798826 systemd-logind[1829]: Removed session 13. Dec 13 14:16:12.798683 systemd[1]: Started sshd@13-172.31.26.163:22-139.178.89.65:39386.service. Dec 13 14:16:12.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.26.163:22-139.178.89.65:39386 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:16:12.802659 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 14:16:12.802799 kernel: audit: type=1130 audit(1734099372.798:473): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.26.163:22-139.178.89.65:39386 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 14:16:12.971000 audit[5684]: USER_ACCT pid=5684 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:12.973070 sshd[5684]: Accepted publickey for core from 139.178.89.65 port 39386 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:16:12.983662 kernel: audit: type=1101 audit(1734099372.971:474): pid=5684 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:12.982000 audit[5684]: CRED_ACQ pid=5684 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:12.985085 sshd[5684]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:16:13.001025 kernel: audit: type=1103 audit(1734099372.982:475): pid=5684 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:13.001182 kernel: audit: type=1006 audit(1734099372.982:476): pid=5684 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Dec 13 14:16:12.982000 audit[5684]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff1ab9d20 a2=3 a3=1 items=0 ppid=1 pid=5684 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:16:13.013241 kernel: audit: type=1300 audit(1734099372.982:476): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff1ab9d20 a2=3 a3=1 items=0 ppid=1 pid=5684 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:16:13.014845 systemd-logind[1829]: New session 14 of user core. Dec 13 14:16:12.982000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:16:13.016890 systemd[1]: Started session-14.scope. Dec 13 14:16:13.019745 kernel: audit: type=1327 audit(1734099372.982:476): proctitle=737368643A20636F7265205B707269765D Dec 13 14:16:13.029000 audit[5684]: USER_START pid=5684 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:13.042000 audit[5687]: CRED_ACQ pid=5687 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:13.061679 kernel: audit: type=1105 audit(1734099373.029:477): pid=5684 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:13.061862 kernel: audit: type=1103 audit(1734099373.042:478): pid=5687 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh 
res=success' Dec 13 14:16:13.280980 sshd[5684]: pam_unix(sshd:session): session closed for user core Dec 13 14:16:13.281000 audit[5684]: USER_END pid=5684 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:13.289501 systemd[1]: sshd@13-172.31.26.163:22-139.178.89.65:39386.service: Deactivated successfully. Dec 13 14:16:13.291121 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 14:16:13.295427 systemd-logind[1829]: Session 14 logged out. Waiting for processes to exit. Dec 13 14:16:13.281000 audit[5684]: CRED_DISP pid=5684 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:13.305001 kernel: audit: type=1106 audit(1734099373.281:479): pid=5684 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:13.305154 kernel: audit: type=1104 audit(1734099373.281:480): pid=5684 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:13.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.26.163:22-139.178.89.65:39386 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:16:13.306191 systemd-logind[1829]: Removed session 14. Dec 13 14:16:14.424471 kubelet[2999]: I1213 14:16:14.424433 2999 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 14:16:14.470265 kubelet[2999]: I1213 14:16:14.470221 2999 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-79sgh" podStartSLOduration=47.170410348 podStartE2EDuration="54.470164451s" podCreationTimestamp="2024-12-13 14:15:20 +0000 UTC" firstStartedPulling="2024-12-13 14:15:51.171163287 +0000 UTC m=+53.675769043" lastFinishedPulling="2024-12-13 14:15:58.47091739 +0000 UTC m=+60.975523146" observedRunningTime="2024-12-13 14:15:59.371871626 +0000 UTC m=+61.876477418" watchObservedRunningTime="2024-12-13 14:16:14.470164451 +0000 UTC m=+76.974770219" Dec 13 14:16:14.531000 audit[5699]: NETFILTER_CFG table=filter:119 family=2 entries=9 op=nft_register_rule pid=5699 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:16:14.531000 audit[5699]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=fffff9a835f0 a2=0 a3=1 items=0 ppid=3178 pid=5699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:16:14.531000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:16:14.540000 audit[5699]: NETFILTER_CFG table=nat:120 family=2 entries=27 op=nft_register_chain pid=5699 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:16:14.540000 audit[5699]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=9348 a0=3 a1=fffff9a835f0 a2=0 a3=1 items=0 ppid=3178 pid=5699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:16:14.540000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:16:18.317379 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 13 14:16:18.317531 kernel: audit: type=1130 audit(1734099378.305:484): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.26.163:22-139.178.89.65:41766 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:16:18.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.26.163:22-139.178.89.65:41766 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:16:18.306593 systemd[1]: Started sshd@14-172.31.26.163:22-139.178.89.65:41766.service. Dec 13 14:16:18.477000 audit[5722]: USER_ACCT pid=5722 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:18.486333 sshd[5722]: Accepted publickey for core from 139.178.89.65 port 41766 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:16:18.488686 kernel: audit: type=1101 audit(1734099378.477:485): pid=5722 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:18.488000 audit[5722]: CRED_ACQ pid=5722 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh 
res=success' Dec 13 14:16:18.493609 sshd[5722]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:16:18.505939 kernel: audit: type=1103 audit(1734099378.488:486): pid=5722 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:18.506083 kernel: audit: type=1006 audit(1734099378.491:487): pid=5722 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Dec 13 14:16:18.491000 audit[5722]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffeab92050 a2=3 a3=1 items=0 ppid=1 pid=5722 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:16:18.515835 kernel: audit: type=1300 audit(1734099378.491:487): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffeab92050 a2=3 a3=1 items=0 ppid=1 pid=5722 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:16:18.491000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:16:18.520054 kernel: audit: type=1327 audit(1734099378.491:487): proctitle=737368643A20636F7265205B707269765D Dec 13 14:16:18.524774 systemd-logind[1829]: New session 15 of user core. Dec 13 14:16:18.525731 systemd[1]: Started session-15.scope. 
Dec 13 14:16:18.544000 audit[5722]: USER_START pid=5722 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:18.556000 audit[5725]: CRED_ACQ pid=5725 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:18.570188 kernel: audit: type=1105 audit(1734099378.544:488): pid=5722 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:18.570287 kernel: audit: type=1103 audit(1734099378.556:489): pid=5725 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:18.795969 sshd[5722]: pam_unix(sshd:session): session closed for user core Dec 13 14:16:18.796000 audit[5722]: USER_END pid=5722 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:18.810217 systemd[1]: sshd@14-172.31.26.163:22-139.178.89.65:41766.service: Deactivated successfully. Dec 13 14:16:18.812990 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 14:16:18.814113 systemd-logind[1829]: Session 15 logged out. 
Waiting for processes to exit. Dec 13 14:16:18.796000 audit[5722]: CRED_DISP pid=5722 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:18.822968 kernel: audit: type=1106 audit(1734099378.796:490): pid=5722 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:18.823111 kernel: audit: type=1104 audit(1734099378.796:491): pid=5722 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:18.809000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.26.163:22-139.178.89.65:41766 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:16:18.825380 systemd-logind[1829]: Removed session 15. Dec 13 14:16:23.822206 systemd[1]: Started sshd@15-172.31.26.163:22-139.178.89.65:41772.service. Dec 13 14:16:23.833325 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 14:16:23.833426 kernel: audit: type=1130 audit(1734099383.820:493): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.26.163:22-139.178.89.65:41772 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:16:23.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.26.163:22-139.178.89.65:41772 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:16:23.992124 sshd[5736]: Accepted publickey for core from 139.178.89.65 port 41772 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:16:23.990000 audit[5736]: USER_ACCT pid=5736 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:23.995896 sshd[5736]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:16:23.991000 audit[5736]: CRED_ACQ pid=5736 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:24.011345 kernel: audit: type=1101 audit(1734099383.990:494): pid=5736 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:24.011486 kernel: audit: type=1103 audit(1734099383.991:495): pid=5736 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:24.017786 kernel: audit: type=1006 audit(1734099383.991:496): pid=5736 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Dec 13 
14:16:23.991000 audit[5736]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe187d470 a2=3 a3=1 items=0 ppid=1 pid=5736 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:16:24.027866 kernel: audit: type=1300 audit(1734099383.991:496): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe187d470 a2=3 a3=1 items=0 ppid=1 pid=5736 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:16:24.029942 kernel: audit: type=1327 audit(1734099383.991:496): proctitle=737368643A20636F7265205B707269765D Dec 13 14:16:23.991000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:16:24.031754 systemd-logind[1829]: New session 16 of user core. Dec 13 14:16:24.034865 systemd[1]: Started session-16.scope. Dec 13 14:16:24.047000 audit[5736]: USER_START pid=5736 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:24.059000 audit[5739]: CRED_ACQ pid=5739 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:24.070097 kernel: audit: type=1105 audit(1734099384.047:497): pid=5736 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:24.070228 kernel: audit: 
type=1103 audit(1734099384.059:498): pid=5739 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:24.300984 sshd[5736]: pam_unix(sshd:session): session closed for user core Dec 13 14:16:24.301000 audit[5736]: USER_END pid=5736 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:24.305688 systemd[1]: sshd@15-172.31.26.163:22-139.178.89.65:41772.service: Deactivated successfully. Dec 13 14:16:24.307144 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 14:16:24.301000 audit[5736]: CRED_DISP pid=5736 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:24.316578 systemd-logind[1829]: Session 16 logged out. Waiting for processes to exit. Dec 13 14:16:24.318463 systemd-logind[1829]: Removed session 16. 
Dec 13 14:16:24.323943 kernel: audit: type=1106 audit(1734099384.301:499): pid=5736 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:24.324074 kernel: audit: type=1104 audit(1734099384.301:500): pid=5736 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:24.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.26.163:22-139.178.89.65:41772 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:16:24.331447 systemd[1]: Started sshd@16-172.31.26.163:22-139.178.89.65:41774.service. Dec 13 14:16:24.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.26.163:22-139.178.89.65:41774 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:16:24.512000 audit[5749]: USER_ACCT pid=5749 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:24.514461 sshd[5749]: Accepted publickey for core from 139.178.89.65 port 41774 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:16:24.513000 audit[5749]: CRED_ACQ pid=5749 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:24.514000 audit[5749]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcede6df0 a2=3 a3=1 items=0 ppid=1 pid=5749 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:16:24.514000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:16:24.517150 sshd[5749]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:16:24.526430 systemd[1]: Started session-17.scope. Dec 13 14:16:24.528702 systemd-logind[1829]: New session 17 of user core. 
Dec 13 14:16:24.539000 audit[5749]: USER_START pid=5749 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:24.542000 audit[5752]: CRED_ACQ pid=5752 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:25.011688 sshd[5749]: pam_unix(sshd:session): session closed for user core Dec 13 14:16:25.011000 audit[5749]: USER_END pid=5749 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:25.011000 audit[5749]: CRED_DISP pid=5749 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:25.016196 systemd[1]: sshd@16-172.31.26.163:22-139.178.89.65:41774.service: Deactivated successfully. Dec 13 14:16:25.015000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.26.163:22-139.178.89.65:41774 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:16:25.018221 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 14:16:25.018895 systemd-logind[1829]: Session 17 logged out. Waiting for processes to exit. Dec 13 14:16:25.021403 systemd-logind[1829]: Removed session 17. 
Dec 13 14:16:25.037970 systemd[1]: Started sshd@17-172.31.26.163:22-139.178.89.65:41788.service. Dec 13 14:16:25.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.26.163:22-139.178.89.65:41788 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:16:25.215000 audit[5760]: USER_ACCT pid=5760 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:25.217541 sshd[5760]: Accepted publickey for core from 139.178.89.65 port 41788 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:16:25.217000 audit[5760]: CRED_ACQ pid=5760 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:25.217000 audit[5760]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff23134a0 a2=3 a3=1 items=0 ppid=1 pid=5760 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:16:25.217000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:16:25.220160 sshd[5760]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:16:25.229764 systemd[1]: Started session-18.scope. Dec 13 14:16:25.232059 systemd-logind[1829]: New session 18 of user core. 
Dec 13 14:16:25.240000 audit[5760]: USER_START pid=5760 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:25.243000 audit[5763]: CRED_ACQ pid=5763 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:28.476000 audit[5779]: NETFILTER_CFG table=filter:121 family=2 entries=20 op=nft_register_rule pid=5779 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:16:28.476000 audit[5779]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11860 a0=3 a1=ffffd4491980 a2=0 a3=1 items=0 ppid=3178 pid=5779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:16:28.476000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:16:28.481000 audit[5779]: NETFILTER_CFG table=nat:122 family=2 entries=22 op=nft_register_rule pid=5779 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:16:28.481000 audit[5779]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6540 a0=3 a1=ffffd4491980 a2=0 a3=1 items=0 ppid=3178 pid=5779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:16:28.481000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:16:28.492399 
sshd[5760]: pam_unix(sshd:session): session closed for user core Dec 13 14:16:28.494000 audit[5760]: USER_END pid=5760 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:28.494000 audit[5760]: CRED_DISP pid=5760 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:28.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.26.163:22-139.178.89.65:41788 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:16:28.498614 systemd[1]: sshd@17-172.31.26.163:22-139.178.89.65:41788.service: Deactivated successfully. Dec 13 14:16:28.500147 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 14:16:28.501771 systemd-logind[1829]: Session 18 logged out. Waiting for processes to exit. Dec 13 14:16:28.509951 systemd-logind[1829]: Removed session 18. Dec 13 14:16:28.516516 systemd[1]: Started sshd@18-172.31.26.163:22-139.178.89.65:46884.service. Dec 13 14:16:28.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.26.163:22-139.178.89.65:46884 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:16:28.561000 audit[5784]: NETFILTER_CFG table=filter:123 family=2 entries=32 op=nft_register_rule pid=5784 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:16:28.561000 audit[5784]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11860 a0=3 a1=ffffec8e98c0 a2=0 a3=1 items=0 ppid=3178 pid=5784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:16:28.561000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:16:28.570000 audit[5784]: NETFILTER_CFG table=nat:124 family=2 entries=22 op=nft_register_rule pid=5784 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:16:28.570000 audit[5784]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6540 a0=3 a1=ffffec8e98c0 a2=0 a3=1 items=0 ppid=3178 pid=5784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:16:28.570000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:16:28.718000 audit[5783]: USER_ACCT pid=5783 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:28.719583 sshd[5783]: Accepted publickey for core from 139.178.89.65 port 46884 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:16:28.721000 audit[5783]: CRED_ACQ pid=5783 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:28.721000 audit[5783]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffa002240 a2=3 a3=1 items=0 ppid=1 pid=5783 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:16:28.721000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:16:28.722926 sshd[5783]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:16:28.734786 systemd[1]: Started session-19.scope. Dec 13 14:16:28.736798 systemd-logind[1829]: New session 19 of user core. Dec 13 14:16:28.752000 audit[5783]: USER_START pid=5783 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:28.756000 audit[5787]: CRED_ACQ pid=5787 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:29.332550 sshd[5783]: pam_unix(sshd:session): session closed for user core Dec 13 14:16:29.347117 kernel: kauditd_printk_skb: 43 callbacks suppressed Dec 13 14:16:29.347298 kernel: audit: type=1106 audit(1734099389.334:530): pid=5783 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:29.334000 audit[5783]: USER_END pid=5783 uid=0 auid=500 ses=19 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:29.337855 systemd[1]: sshd@18-172.31.26.163:22-139.178.89.65:46884.service: Deactivated successfully. Dec 13 14:16:29.339276 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 14:16:29.349180 systemd-logind[1829]: Session 19 logged out. Waiting for processes to exit. Dec 13 14:16:29.334000 audit[5783]: CRED_DISP pid=5783 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:29.362650 kernel: audit: type=1104 audit(1734099389.334:531): pid=5783 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:29.355989 systemd-logind[1829]: Removed session 19. Dec 13 14:16:29.358911 systemd[1]: Started sshd@19-172.31.26.163:22-139.178.89.65:46898.service. Dec 13 14:16:29.337000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.26.163:22-139.178.89.65:46884 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:16:29.377053 kernel: audit: type=1131 audit(1734099389.337:532): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.26.163:22-139.178.89.65:46884 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:16:29.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.26.163:22-139.178.89.65:46898 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:16:29.385345 kernel: audit: type=1130 audit(1734099389.358:533): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.26.163:22-139.178.89.65:46898 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:16:29.538000 audit[5794]: USER_ACCT pid=5794 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:29.541081 sshd[5794]: Accepted publickey for core from 139.178.89.65 port 46898 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:16:29.544222 sshd[5794]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:16:29.538000 audit[5794]: CRED_ACQ pid=5794 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:29.557847 kernel: audit: type=1101 audit(1734099389.538:534): pid=5794 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:29.557958 kernel: audit: type=1103 audit(1734099389.538:535): pid=5794 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:29.565143 kernel: audit: type=1006 audit(1734099389.538:536): pid=5794 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Dec 13 14:16:29.538000 audit[5794]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcc327c20 a2=3 a3=1 items=0 ppid=1 pid=5794 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:16:29.578022 kernel: audit: type=1300 audit(1734099389.538:536): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcc327c20 a2=3 a3=1 items=0 ppid=1 pid=5794 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:16:29.578150 kernel: audit: type=1327 audit(1734099389.538:536): proctitle=737368643A20636F7265205B707269765D Dec 13 14:16:29.538000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:16:29.580836 systemd[1]: Started session-20.scope. Dec 13 14:16:29.585965 systemd-logind[1829]: New session 20 of user core. 
Dec 13 14:16:29.599000 audit[5794]: USER_START pid=5794 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:29.602000 audit[5797]: CRED_ACQ pid=5797 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:29.611801 kernel: audit: type=1105 audit(1734099389.599:537): pid=5794 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:29.850028 sshd[5794]: pam_unix(sshd:session): session closed for user core
Dec 13 14:16:29.851000 audit[5794]: USER_END pid=5794 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:29.852000 audit[5794]: CRED_DISP pid=5794 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:29.856000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.26.163:22-139.178.89.65:46898 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:16:29.856087 systemd[1]: sshd@19-172.31.26.163:22-139.178.89.65:46898.service: Deactivated successfully.
Dec 13 14:16:29.858068 systemd-logind[1829]: Session 20 logged out. Waiting for processes to exit.
Dec 13 14:16:29.859202 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 14:16:29.861897 systemd-logind[1829]: Removed session 20.
Dec 13 14:16:32.826708 kubelet[2999]: I1213 14:16:32.826658 2999 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 14:16:32.912000 audit[5810]: NETFILTER_CFG table=filter:125 family=2 entries=32 op=nft_register_rule pid=5810 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 14:16:32.912000 audit[5810]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11860 a0=3 a1=fffff05226d0 a2=0 a3=1 items=0 ppid=3178 pid=5810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:16:32.912000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 14:16:32.924000 audit[5810]: NETFILTER_CFG table=nat:126 family=2 entries=34 op=nft_register_chain pid=5810 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 14:16:32.924000 audit[5810]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11236 a0=3 a1=fffff05226d0 a2=0 a3=1 items=0 ppid=3178 pid=5810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:16:32.924000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 14:16:34.875833 systemd[1]: Started sshd@20-172.31.26.163:22-139.178.89.65:46906.service.
Dec 13 14:16:34.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.26.163:22-139.178.89.65:46906 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:16:34.884650 kernel: kauditd_printk_skb: 10 callbacks suppressed
Dec 13 14:16:34.884805 kernel: audit: type=1130 audit(1734099394.876:544): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.26.163:22-139.178.89.65:46906 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:16:35.042000 audit[5830]: USER_ACCT pid=5830 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:35.043854 sshd[5830]: Accepted publickey for core from 139.178.89.65 port 46906 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q
Dec 13 14:16:35.054691 kernel: audit: type=1101 audit(1734099395.042:545): pid=5830 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:35.054000 audit[5830]: CRED_ACQ pid=5830 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:35.055925 sshd[5830]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:16:35.069866 kernel: audit: type=1103 audit(1734099395.054:546): pid=5830 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:35.069972 kernel: audit: type=1006 audit(1734099395.054:547): pid=5830 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1
Dec 13 14:16:35.054000 audit[5830]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc94fd9a0 a2=3 a3=1 items=0 ppid=1 pid=5830 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:16:35.080447 kernel: audit: type=1300 audit(1734099395.054:547): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc94fd9a0 a2=3 a3=1 items=0 ppid=1 pid=5830 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:16:35.054000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 14:16:35.084009 kernel: audit: type=1327 audit(1734099395.054:547): proctitle=737368643A20636F7265205B707269765D
Dec 13 14:16:35.087747 systemd-logind[1829]: New session 21 of user core.
Dec 13 14:16:35.090402 systemd[1]: Started session-21.scope.
Dec 13 14:16:35.101000 audit[5830]: USER_START pid=5830 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:35.113933 kernel: audit: type=1105 audit(1734099395.101:548): pid=5830 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:35.114059 kernel: audit: type=1103 audit(1734099395.113:549): pid=5833 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:35.113000 audit[5833]: CRED_ACQ pid=5833 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:35.351708 sshd[5830]: pam_unix(sshd:session): session closed for user core
Dec 13 14:16:35.352000 audit[5830]: USER_END pid=5830 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:35.353000 audit[5830]: CRED_DISP pid=5830 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:35.365341 systemd[1]: sshd@20-172.31.26.163:22-139.178.89.65:46906.service: Deactivated successfully.
Dec 13 14:16:35.374382 kernel: audit: type=1106 audit(1734099395.352:550): pid=5830 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:35.374537 kernel: audit: type=1104 audit(1734099395.353:551): pid=5830 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:35.367010 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 14:16:35.364000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.26.163:22-139.178.89.65:46906 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:16:35.376872 systemd-logind[1829]: Session 21 logged out. Waiting for processes to exit.
Dec 13 14:16:35.379602 systemd-logind[1829]: Removed session 21.
Dec 13 14:16:37.230000 audit[5844]: NETFILTER_CFG table=filter:127 family=2 entries=20 op=nft_register_rule pid=5844 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 14:16:37.230000 audit[5844]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffd06d6c20 a2=0 a3=1 items=0 ppid=3178 pid=5844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:16:37.230000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 14:16:37.256000 audit[5844]: NETFILTER_CFG table=nat:128 family=2 entries=106 op=nft_register_chain pid=5844 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 14:16:37.256000 audit[5844]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=49452 a0=3 a1=ffffd06d6c20 a2=0 a3=1 items=0 ppid=3178 pid=5844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:16:37.256000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 14:16:38.143656 amazon-ssm-agent[1813]: 2024-12-13 14:16:38 INFO [HealthCheck] HealthCheck reporting agent health.
Dec 13 14:16:40.380335 systemd[1]: Started sshd@21-172.31.26.163:22-139.178.89.65:54780.service.
Dec 13 14:16:40.391086 kernel: kauditd_printk_skb: 7 callbacks suppressed
Dec 13 14:16:40.391144 kernel: audit: type=1130 audit(1734099400.380:555): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.26.163:22-139.178.89.65:54780 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:16:40.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.26.163:22-139.178.89.65:54780 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:16:40.552000 audit[5847]: USER_ACCT pid=5847 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:40.553347 sshd[5847]: Accepted publickey for core from 139.178.89.65 port 54780 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q
Dec 13 14:16:40.564660 kernel: audit: type=1101 audit(1734099400.552:556): pid=5847 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:40.564875 kernel: audit: type=1103 audit(1734099400.563:557): pid=5847 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:40.563000 audit[5847]: CRED_ACQ pid=5847 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:40.565899 sshd[5847]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:16:40.573823 kernel: audit: type=1006 audit(1734099400.564:558): pid=5847 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1
Dec 13 14:16:40.564000 audit[5847]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdc3e3c90 a2=3 a3=1 items=0 ppid=1 pid=5847 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:16:40.589797 kernel: audit: type=1300 audit(1734099400.564:558): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdc3e3c90 a2=3 a3=1 items=0 ppid=1 pid=5847 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:16:40.589925 kernel: audit: type=1327 audit(1734099400.564:558): proctitle=737368643A20636F7265205B707269765D
Dec 13 14:16:40.564000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 14:16:40.599561 systemd[1]: Started session-22.scope.
Dec 13 14:16:40.600170 systemd-logind[1829]: New session 22 of user core.
Dec 13 14:16:40.610000 audit[5847]: USER_START pid=5847 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:40.613000 audit[5850]: CRED_ACQ pid=5850 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:40.631263 kernel: audit: type=1105 audit(1734099400.610:559): pid=5847 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:40.631386 kernel: audit: type=1103 audit(1734099400.613:560): pid=5850 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:40.902022 sshd[5847]: pam_unix(sshd:session): session closed for user core
Dec 13 14:16:40.903000 audit[5847]: USER_END pid=5847 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:40.903000 audit[5847]: CRED_DISP pid=5847 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:40.927064 kernel: audit: type=1106 audit(1734099400.903:561): pid=5847 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:40.927213 kernel: audit: type=1104 audit(1734099400.903:562): pid=5847 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:40.918276 systemd[1]: sshd@21-172.31.26.163:22-139.178.89.65:54780.service: Deactivated successfully.
Dec 13 14:16:40.919723 systemd-logind[1829]: Session 22 logged out. Waiting for processes to exit.
Dec 13 14:16:40.920378 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 14:16:40.918000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.26.163:22-139.178.89.65:54780 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:16:40.931855 systemd-logind[1829]: Removed session 22.
Dec 13 14:16:45.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.26.163:22-139.178.89.65:54794 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:16:45.930275 systemd[1]: Started sshd@22-172.31.26.163:22-139.178.89.65:54794.service.
Dec 13 14:16:45.935867 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 13 14:16:45.935998 kernel: audit: type=1130 audit(1734099405.929:564): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.26.163:22-139.178.89.65:54794 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:16:46.108000 audit[5880]: USER_ACCT pid=5880 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:46.110936 sshd[5880]: Accepted publickey for core from 139.178.89.65 port 54794 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q
Dec 13 14:16:46.118327 sshd[5880]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:16:46.115000 audit[5880]: CRED_ACQ pid=5880 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:46.134792 kernel: audit: type=1101 audit(1734099406.108:565): pid=5880 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:46.134914 kernel: audit: type=1103 audit(1734099406.115:566): pid=5880 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:46.143552 systemd[1]: Started session-23.scope.
Dec 13 14:16:46.145574 systemd-logind[1829]: New session 23 of user core.
Dec 13 14:16:46.166061 kernel: audit: type=1006 audit(1734099406.115:567): pid=5880 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1
Dec 13 14:16:46.115000 audit[5880]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff086dea0 a2=3 a3=1 items=0 ppid=1 pid=5880 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:16:46.193649 kernel: audit: type=1300 audit(1734099406.115:567): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff086dea0 a2=3 a3=1 items=0 ppid=1 pid=5880 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:16:46.193767 kernel: audit: type=1327 audit(1734099406.115:567): proctitle=737368643A20636F7265205B707269765D
Dec 13 14:16:46.193810 kernel: audit: type=1105 audit(1734099406.157:568): pid=5880 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:46.115000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 14:16:46.157000 audit[5880]: USER_START pid=5880 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:46.208088 kernel: audit: type=1103 audit(1734099406.165:569): pid=5889 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:46.165000 audit[5889]: CRED_ACQ pid=5889 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:46.223419 systemd[1]: run-containerd-runc-k8s.io-aebae663922ce25f57c507da18de16c321c4c3e464506af3b001c3edf0ce78df-runc.pJGLPd.mount: Deactivated successfully.
Dec 13 14:16:46.489914 sshd[5880]: pam_unix(sshd:session): session closed for user core
Dec 13 14:16:46.492000 audit[5880]: USER_END pid=5880 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:46.497786 systemd-logind[1829]: Session 23 logged out. Waiting for processes to exit.
Dec 13 14:16:46.501369 systemd[1]: sshd@22-172.31.26.163:22-139.178.89.65:54794.service: Deactivated successfully.
Dec 13 14:16:46.502936 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 14:16:46.505796 systemd-logind[1829]: Removed session 23.
Dec 13 14:16:46.492000 audit[5880]: CRED_DISP pid=5880 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:46.516568 kernel: audit: type=1106 audit(1734099406.492:570): pid=5880 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:46.516705 kernel: audit: type=1104 audit(1734099406.492:571): pid=5880 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:46.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.26.163:22-139.178.89.65:54794 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:16:51.517286 systemd[1]: Started sshd@23-172.31.26.163:22-139.178.89.65:56202.service.
Dec 13 14:16:51.528167 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 13 14:16:51.528287 kernel: audit: type=1130 audit(1734099411.517:573): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.26.163:22-139.178.89.65:56202 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:16:51.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.26.163:22-139.178.89.65:56202 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:16:51.684000 audit[5913]: USER_ACCT pid=5913 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:51.687964 sshd[5913]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:16:51.695378 sshd[5913]: Accepted publickey for core from 139.178.89.65 port 56202 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q
Dec 13 14:16:51.686000 audit[5913]: CRED_ACQ pid=5913 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:51.706469 kernel: audit: type=1101 audit(1734099411.684:574): pid=5913 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:51.706571 kernel: audit: type=1103 audit(1734099411.686:575): pid=5913 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:51.716010 kernel: audit: type=1006 audit(1734099411.686:576): pid=5913 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1
Dec 13 14:16:51.686000 audit[5913]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff5968d30 a2=3 a3=1 items=0 ppid=1 pid=5913 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:16:51.726201 kernel: audit: type=1300 audit(1734099411.686:576): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff5968d30 a2=3 a3=1 items=0 ppid=1 pid=5913 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:16:51.686000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 14:16:51.731489 kernel: audit: type=1327 audit(1734099411.686:576): proctitle=737368643A20636F7265205B707269765D
Dec 13 14:16:51.731089 systemd-logind[1829]: New session 24 of user core.
Dec 13 14:16:51.733074 systemd[1]: Started session-24.scope.
Dec 13 14:16:51.744000 audit[5913]: USER_START pid=5913 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:51.756000 audit[5916]: CRED_ACQ pid=5916 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:51.765964 kernel: audit: type=1105 audit(1734099411.744:577): pid=5913 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:51.766071 kernel: audit: type=1103 audit(1734099411.756:578): pid=5916 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:52.015961 sshd[5913]: pam_unix(sshd:session): session closed for user core
Dec 13 14:16:52.018000 audit[5913]: USER_END pid=5913 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:52.018000 audit[5913]: CRED_DISP pid=5913 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:52.030295 systemd[1]: sshd@23-172.31.26.163:22-139.178.89.65:56202.service: Deactivated successfully.
Dec 13 14:16:52.031561 systemd[1]: session-24.scope: Deactivated successfully.
Dec 13 14:16:52.034446 systemd-logind[1829]: Session 24 logged out. Waiting for processes to exit.
Dec 13 14:16:52.036499 systemd-logind[1829]: Removed session 24.
Dec 13 14:16:52.038078 kernel: audit: type=1106 audit(1734099412.018:579): pid=5913 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:52.038216 kernel: audit: type=1104 audit(1734099412.018:580): pid=5913 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:52.030000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.26.163:22-139.178.89.65:56202 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:16:57.041012 systemd[1]: Started sshd@24-172.31.26.163:22-139.178.89.65:56208.service.
Dec 13 14:16:57.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.26.163:22-139.178.89.65:56208 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:16:57.045698 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 13 14:16:57.045832 kernel: audit: type=1130 audit(1734099417.041:582): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.26.163:22-139.178.89.65:56208 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:16:57.215000 audit[5925]: USER_ACCT pid=5925 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:57.216747 sshd[5925]: Accepted publickey for core from 139.178.89.65 port 56208 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q
Dec 13 14:16:57.227751 kernel: audit: type=1101 audit(1734099417.215:583): pid=5925 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:57.227865 kernel: audit: type=1103 audit(1734099417.226:584): pid=5925 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:57.226000 audit[5925]: CRED_ACQ pid=5925 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:57.229072 sshd[5925]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:16:57.238880 systemd[1]: Started session-25.scope.
Dec 13 14:16:57.240464 systemd-logind[1829]: New session 25 of user core.
Dec 13 14:16:57.242347 kernel: audit: type=1006 audit(1734099417.227:585): pid=5925 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1
Dec 13 14:16:57.227000 audit[5925]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff695ad50 a2=3 a3=1 items=0 ppid=1 pid=5925 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:16:57.256745 kernel: audit: type=1300 audit(1734099417.227:585): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff695ad50 a2=3 a3=1 items=0 ppid=1 pid=5925 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:16:57.227000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 14:16:57.262052 kernel: audit: type=1327 audit(1734099417.227:585): proctitle=737368643A20636F7265205B707269765D
Dec 13 14:16:57.259000 audit[5925]: USER_START pid=5925 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:57.273000 audit[5928]: CRED_ACQ pid=5928 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:57.282752 kernel: audit: type=1105 audit(1734099417.259:586): pid=5925 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:57.282820 kernel: audit: type=1103 audit(1734099417.273:587): pid=5928 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:57.508284 sshd[5925]: pam_unix(sshd:session): session closed for user core
Dec 13 14:16:57.509000 audit[5925]: USER_END pid=5925 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:16:57.513394 systemd-logind[1829]: Session 25 logged out. Waiting for processes to exit.
Dec 13 14:16:57.514927 systemd[1]: sshd@24-172.31.26.163:22-139.178.89.65:56208.service: Deactivated successfully.
Dec 13 14:16:57.516294 systemd[1]: session-25.scope: Deactivated successfully.
Dec 13 14:16:57.518491 systemd-logind[1829]: Removed session 25.
Dec 13 14:16:57.509000 audit[5925]: CRED_DISP pid=5925 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:57.532164 kernel: audit: type=1106 audit(1734099417.509:588): pid=5925 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:57.532327 kernel: audit: type=1104 audit(1734099417.509:589): pid=5925 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:16:57.514000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.26.163:22-139.178.89.65:56208 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:17:02.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.26.163:22-139.178.89.65:60134 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:17:02.538941 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 14:17:02.539029 kernel: audit: type=1130 audit(1734099422.535:591): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.26.163:22-139.178.89.65:60134 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:17:02.536655 systemd[1]: Started sshd@25-172.31.26.163:22-139.178.89.65:60134.service. 
Dec 13 14:17:02.712000 audit[5940]: USER_ACCT pid=5940 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:17:02.714524 sshd[5940]: Accepted publickey for core from 139.178.89.65 port 60134 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:17:02.724666 kernel: audit: type=1101 audit(1734099422.712:592): pid=5940 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:17:02.723000 audit[5940]: CRED_ACQ pid=5940 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:17:02.726799 sshd[5940]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:17:02.740830 kernel: audit: type=1103 audit(1734099422.723:593): pid=5940 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:17:02.740963 kernel: audit: type=1006 audit(1734099422.724:594): pid=5940 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Dec 13 14:17:02.724000 audit[5940]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff3e04d40 a2=3 a3=1 items=0 ppid=1 pid=5940 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 13 14:17:02.750906 kernel: audit: type=1300 audit(1734099422.724:594): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff3e04d40 a2=3 a3=1 items=0 ppid=1 pid=5940 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:17:02.724000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:17:02.754251 kernel: audit: type=1327 audit(1734099422.724:594): proctitle=737368643A20636F7265205B707269765D Dec 13 14:17:02.757155 systemd-logind[1829]: New session 26 of user core. Dec 13 14:17:02.759502 systemd[1]: Started session-26.scope. Dec 13 14:17:02.769000 audit[5940]: USER_START pid=5940 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:17:02.780000 audit[5943]: CRED_ACQ pid=5943 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:17:02.791014 kernel: audit: type=1105 audit(1734099422.769:595): pid=5940 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:17:02.791144 kernel: audit: type=1103 audit(1734099422.780:596): pid=5943 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:17:03.071974 
sshd[5940]: pam_unix(sshd:session): session closed for user core Dec 13 14:17:03.072000 audit[5940]: USER_END pid=5940 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:17:03.075000 audit[5940]: CRED_DISP pid=5940 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:17:03.085403 systemd[1]: sshd@25-172.31.26.163:22-139.178.89.65:60134.service: Deactivated successfully. Dec 13 14:17:03.088314 systemd[1]: session-26.scope: Deactivated successfully. Dec 13 14:17:03.093773 kernel: audit: type=1106 audit(1734099423.072:597): pid=5940 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:17:03.093945 kernel: audit: type=1104 audit(1734099423.075:598): pid=5940 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:17:03.093947 systemd-logind[1829]: Session 26 logged out. Waiting for processes to exit. Dec 13 14:17:03.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.26.163:22-139.178.89.65:60134 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:17:03.096006 systemd-logind[1829]: Removed session 26. 
Dec 13 14:17:08.097686 systemd[1]: Started sshd@26-172.31.26.163:22-139.178.89.65:34328.service. Dec 13 14:17:08.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.26.163:22-139.178.89.65:34328 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:17:08.099845 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 14:17:08.099981 kernel: audit: type=1130 audit(1734099428.097:600): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.26.163:22-139.178.89.65:34328 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:17:08.275000 audit[5978]: USER_ACCT pid=5978 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:17:08.277006 sshd[5978]: Accepted publickey for core from 139.178.89.65 port 34328 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:17:08.288000 audit[5978]: CRED_ACQ pid=5978 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:17:08.289858 sshd[5978]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:17:08.298383 kernel: audit: type=1101 audit(1734099428.275:601): pid=5978 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:17:08.298592 kernel: audit: type=1103 audit(1734099428.288:602): 
pid=5978 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:17:08.306336 kernel: audit: type=1006 audit(1734099428.288:603): pid=5978 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Dec 13 14:17:08.306438 kernel: audit: type=1300 audit(1734099428.288:603): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe0ac2010 a2=3 a3=1 items=0 ppid=1 pid=5978 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:17:08.288000 audit[5978]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe0ac2010 a2=3 a3=1 items=0 ppid=1 pid=5978 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:17:08.310370 systemd[1]: Started session-27.scope. Dec 13 14:17:08.312140 systemd-logind[1829]: New session 27 of user core. 
Dec 13 14:17:08.288000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:17:08.323933 kernel: audit: type=1327 audit(1734099428.288:603): proctitle=737368643A20636F7265205B707269765D Dec 13 14:17:08.333000 audit[5978]: USER_START pid=5978 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:17:08.337000 audit[5982]: CRED_ACQ pid=5982 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:17:08.354173 kernel: audit: type=1105 audit(1734099428.333:604): pid=5978 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:17:08.354296 kernel: audit: type=1103 audit(1734099428.337:605): pid=5982 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:17:08.574682 sshd[5978]: pam_unix(sshd:session): session closed for user core Dec 13 14:17:08.575000 audit[5978]: USER_END pid=5978 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:17:08.581139 systemd-logind[1829]: Session 27 logged out. 
Waiting for processes to exit. Dec 13 14:17:08.583795 systemd[1]: sshd@26-172.31.26.163:22-139.178.89.65:34328.service: Deactivated successfully. Dec 13 14:17:08.585257 systemd[1]: session-27.scope: Deactivated successfully. Dec 13 14:17:08.588573 systemd-logind[1829]: Removed session 27. Dec 13 14:17:08.577000 audit[5978]: CRED_DISP pid=5978 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:17:08.598142 kernel: audit: type=1106 audit(1734099428.575:606): pid=5978 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:17:08.598259 kernel: audit: type=1104 audit(1734099428.577:607): pid=5978 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:17:08.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.26.163:22-139.178.89.65:34328 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:17:21.863405 env[1837]: time="2024-12-13T14:17:21.862561994Z" level=info msg="shim disconnected" id=1710dccdff80addd75f8f23289752d3de778549cf515ccddd4cb48ee46663674 Dec 13 14:17:21.864208 env[1837]: time="2024-12-13T14:17:21.864143842Z" level=warning msg="cleaning up after shim disconnected" id=1710dccdff80addd75f8f23289752d3de778549cf515ccddd4cb48ee46663674 namespace=k8s.io Dec 13 14:17:21.864434 env[1837]: time="2024-12-13T14:17:21.864405139Z" level=info msg="cleaning up dead shim" Dec 13 14:17:21.871985 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1710dccdff80addd75f8f23289752d3de778549cf515ccddd4cb48ee46663674-rootfs.mount: Deactivated successfully. Dec 13 14:17:21.883842 env[1837]: time="2024-12-13T14:17:21.883785789Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:17:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6045 runtime=io.containerd.runc.v2\n" Dec 13 14:17:22.603509 kubelet[2999]: I1213 14:17:22.603446 2999 scope.go:117] "RemoveContainer" containerID="1710dccdff80addd75f8f23289752d3de778549cf515ccddd4cb48ee46663674" Dec 13 14:17:22.606811 env[1837]: time="2024-12-13T14:17:22.606746741Z" level=info msg="CreateContainer within sandbox \"6e1aae40d73f3a75a6140988fa7b085d96ca0a237190298062117d2b0dfff84d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Dec 13 14:17:22.642294 env[1837]: time="2024-12-13T14:17:22.642228220Z" level=info msg="CreateContainer within sandbox \"6e1aae40d73f3a75a6140988fa7b085d96ca0a237190298062117d2b0dfff84d\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"e762d2a1ef1742574ee0b94b4af5d672ba9ffa37cfc433ffecd93fa78858dafc\"" Dec 13 14:17:22.643405 env[1837]: time="2024-12-13T14:17:22.643341094Z" level=info msg="StartContainer for \"e762d2a1ef1742574ee0b94b4af5d672ba9ffa37cfc433ffecd93fa78858dafc\"" Dec 13 14:17:22.756042 env[1837]: time="2024-12-13T14:17:22.755954466Z" level=info msg="StartContainer for 
\"e762d2a1ef1742574ee0b94b4af5d672ba9ffa37cfc433ffecd93fa78858dafc\" returns successfully" Dec 13 14:17:23.014187 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a65327b1ad519891d82cb157f7c05d3b20566a11d6dead32e25adb40186e3a9-rootfs.mount: Deactivated successfully. Dec 13 14:17:23.017336 env[1837]: time="2024-12-13T14:17:23.017272295Z" level=info msg="shim disconnected" id=0a65327b1ad519891d82cb157f7c05d3b20566a11d6dead32e25adb40186e3a9 Dec 13 14:17:23.017927 env[1837]: time="2024-12-13T14:17:23.017346893Z" level=warning msg="cleaning up after shim disconnected" id=0a65327b1ad519891d82cb157f7c05d3b20566a11d6dead32e25adb40186e3a9 namespace=k8s.io Dec 13 14:17:23.017927 env[1837]: time="2024-12-13T14:17:23.017369491Z" level=info msg="cleaning up dead shim" Dec 13 14:17:23.032716 env[1837]: time="2024-12-13T14:17:23.032649836Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:17:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6110 runtime=io.containerd.runc.v2\n" Dec 13 14:17:23.613196 kubelet[2999]: I1213 14:17:23.613152 2999 scope.go:117] "RemoveContainer" containerID="0a65327b1ad519891d82cb157f7c05d3b20566a11d6dead32e25adb40186e3a9" Dec 13 14:17:23.618969 env[1837]: time="2024-12-13T14:17:23.618913385Z" level=info msg="CreateContainer within sandbox \"3225b87fc3597b24d12ae9d60b82b2fe2331956d6c27d984fed8ecd8729e5c13\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Dec 13 14:17:23.652053 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3599100237.mount: Deactivated successfully. 
Dec 13 14:17:23.655387 env[1837]: time="2024-12-13T14:17:23.655328730Z" level=info msg="CreateContainer within sandbox \"3225b87fc3597b24d12ae9d60b82b2fe2331956d6c27d984fed8ecd8729e5c13\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"0a8147b8c7bed3d2fd2b2643383c226dd41c320ec983a0ea24a14f5c0b10c4ce\"" Dec 13 14:17:23.657019 env[1837]: time="2024-12-13T14:17:23.656940450Z" level=info msg="StartContainer for \"0a8147b8c7bed3d2fd2b2643383c226dd41c320ec983a0ea24a14f5c0b10c4ce\"" Dec 13 14:17:23.664954 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount224068298.mount: Deactivated successfully. Dec 13 14:17:23.783335 env[1837]: time="2024-12-13T14:17:23.783266524Z" level=info msg="StartContainer for \"0a8147b8c7bed3d2fd2b2643383c226dd41c320ec983a0ea24a14f5c0b10c4ce\" returns successfully" Dec 13 14:17:27.494429 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c04699f4541468a76420c69670190d96234fe42354fe990aeecd2c63982a8ee3-rootfs.mount: Deactivated successfully. 
Dec 13 14:17:27.498128 env[1837]: time="2024-12-13T14:17:27.498055886Z" level=info msg="shim disconnected" id=c04699f4541468a76420c69670190d96234fe42354fe990aeecd2c63982a8ee3 Dec 13 14:17:27.498752 env[1837]: time="2024-12-13T14:17:27.498133772Z" level=warning msg="cleaning up after shim disconnected" id=c04699f4541468a76420c69670190d96234fe42354fe990aeecd2c63982a8ee3 namespace=k8s.io Dec 13 14:17:27.498752 env[1837]: time="2024-12-13T14:17:27.498158674Z" level=info msg="cleaning up dead shim" Dec 13 14:17:27.512668 env[1837]: time="2024-12-13T14:17:27.512561151Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:17:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6174 runtime=io.containerd.runc.v2\n" Dec 13 14:17:27.630446 kubelet[2999]: I1213 14:17:27.630385 2999 scope.go:117] "RemoveContainer" containerID="c04699f4541468a76420c69670190d96234fe42354fe990aeecd2c63982a8ee3" Dec 13 14:17:27.634896 env[1837]: time="2024-12-13T14:17:27.634837743Z" level=info msg="CreateContainer within sandbox \"4081c058d68d99a993eed0316727ef9208d5f0c7f5f3936ab92b5929d0478ba4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Dec 13 14:17:27.662244 env[1837]: time="2024-12-13T14:17:27.662180880Z" level=info msg="CreateContainer within sandbox \"4081c058d68d99a993eed0316727ef9208d5f0c7f5f3936ab92b5929d0478ba4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"d5f7d1e794da87853bdc938bc3cbc34f352e1922a4979b7973adbe1d12665748\"" Dec 13 14:17:27.663267 env[1837]: time="2024-12-13T14:17:27.663221138Z" level=info msg="StartContainer for \"d5f7d1e794da87853bdc938bc3cbc34f352e1922a4979b7973adbe1d12665748\"" Dec 13 14:17:27.808400 env[1837]: time="2024-12-13T14:17:27.806510798Z" level=info msg="StartContainer for \"d5f7d1e794da87853bdc938bc3cbc34f352e1922a4979b7973adbe1d12665748\" returns successfully" Dec 13 14:17:30.658374 kubelet[2999]: E1213 14:17:30.658308 2999 controller.go:195] "Failed to update lease" err="Put 
\"https://172.31.26.163:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-163?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 13 14:17:40.658973 kubelet[2999]: E1213 14:17:40.658914 2999 controller.go:195] "Failed to update lease" err="Put \"https://172.31.26.163:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-163?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"