Feb 9 18:59:47.189531 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Feb 9 17:23:38 -00 2024
Feb 9 18:59:47.189565 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 18:59:47.189580 kernel: BIOS-provided physical RAM map:
Feb 9 18:59:47.189591 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 9 18:59:47.189602 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 9 18:59:47.189613 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 9 18:59:47.189629 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Feb 9 18:59:47.189641 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Feb 9 18:59:47.189652 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Feb 9 18:59:47.189663 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 9 18:59:47.189675 kernel: NX (Execute Disable) protection: active
Feb 9 18:59:47.189686 kernel: SMBIOS 2.7 present.
Feb 9 18:59:47.189697 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Feb 9 18:59:47.189709 kernel: Hypervisor detected: KVM
Feb 9 18:59:47.189726 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 9 18:59:47.189739 kernel: kvm-clock: cpu 0, msr 2ffaa001, primary cpu clock
Feb 9 18:59:47.189751 kernel: kvm-clock: using sched offset of 7054890538 cycles
Feb 9 18:59:47.189765 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 9 18:59:47.189778 kernel: tsc: Detected 2500.004 MHz processor
Feb 9 18:59:47.189790 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 9 18:59:47.189807 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 9 18:59:47.189833 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Feb 9 18:59:47.189844 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 9 18:59:47.189855 kernel: Using GB pages for direct mapping
Feb 9 18:59:47.189865 kernel: ACPI: Early table checksum verification disabled
Feb 9 18:59:47.189875 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Feb 9 18:59:47.189887 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Feb 9 18:59:47.189899 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 9 18:59:47.189910 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Feb 9 18:59:47.189925 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Feb 9 18:59:47.189937 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Feb 9 18:59:47.189949 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 9 18:59:47.189961 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Feb 9 18:59:47.189974 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 9 18:59:47.189986 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Feb 9 18:59:47.189999 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Feb 9 18:59:47.190010 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Feb 9 18:59:47.190024 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Feb 9 18:59:47.190036 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Feb 9 18:59:47.190047 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Feb 9 18:59:47.190063 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Feb 9 18:59:47.190076 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Feb 9 18:59:47.190088 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Feb 9 18:59:47.190101 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Feb 9 18:59:47.190117 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Feb 9 18:59:47.190130 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Feb 9 18:59:47.190144 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Feb 9 18:59:47.190158 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 9 18:59:47.190171 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 9 18:59:47.190185 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Feb 9 18:59:47.190199 kernel: NUMA: Initialized distance table, cnt=1
Feb 9 18:59:47.190213 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Feb 9 18:59:47.190229 kernel: Zone ranges:
Feb 9 18:59:47.190299 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 9 18:59:47.190314 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Feb 9 18:59:47.190327 kernel: Normal empty
Feb 9 18:59:47.190341 kernel: Movable zone start for each node
Feb 9 18:59:47.190383 kernel: Early memory node ranges
Feb 9 18:59:47.190396 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 9 18:59:47.190409 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Feb 9 18:59:47.190423 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Feb 9 18:59:47.190466 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 9 18:59:47.190481 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 9 18:59:47.190493 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Feb 9 18:59:47.190507 kernel: ACPI: PM-Timer IO Port: 0xb008
Feb 9 18:59:47.190520 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 9 18:59:47.190559 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Feb 9 18:59:47.190573 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 9 18:59:47.190587 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 9 18:59:47.190601 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 9 18:59:47.190644 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 9 18:59:47.190658 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 9 18:59:47.190672 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 9 18:59:47.190686 kernel: TSC deadline timer available
Feb 9 18:59:47.191188 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 9 18:59:47.191207 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Feb 9 18:59:47.191221 kernel: Booting paravirtualized kernel on KVM
Feb 9 18:59:47.191235 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 9 18:59:47.191249 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Feb 9 18:59:47.191268 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576
Feb 9 18:59:47.191288 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152
Feb 9 18:59:47.191300 kernel: pcpu-alloc: [0] 0 1
Feb 9 18:59:47.191312 kernel: kvm-guest: stealtime: cpu 0, msr 7b61c0c0
Feb 9 18:59:47.191326 kernel: kvm-guest: PV spinlocks enabled
Feb 9 18:59:47.191339 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 9 18:59:47.191352 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Feb 9 18:59:47.191366 kernel: Policy zone: DMA32
Feb 9 18:59:47.191383 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 18:59:47.191401 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 18:59:47.191415 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 9 18:59:47.191429 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 9 18:59:47.191443 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 18:59:47.191458 kernel: Memory: 1936476K/2057760K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 121024K reserved, 0K cma-reserved)
Feb 9 18:59:47.191473 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 9 18:59:47.191487 kernel: Kernel/User page tables isolation: enabled
Feb 9 18:59:47.191501 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 9 18:59:47.191518 kernel: ftrace: allocated 135 pages with 4 groups
Feb 9 18:59:47.191532 kernel: rcu: Hierarchical RCU implementation.
Feb 9 18:59:47.191547 kernel: rcu: RCU event tracing is enabled.
Feb 9 18:59:47.191562 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 9 18:59:47.191613 kernel: Rude variant of Tasks RCU enabled.
Feb 9 18:59:47.191630 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 18:59:47.191674 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 18:59:47.191689 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 9 18:59:47.191704 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb 9 18:59:47.192016 kernel: random: crng init done
Feb 9 18:59:47.192035 kernel: Console: colour VGA+ 80x25
Feb 9 18:59:47.192049 kernel: printk: console [ttyS0] enabled
Feb 9 18:59:47.192064 kernel: ACPI: Core revision 20210730
Feb 9 18:59:47.192078 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Feb 9 18:59:47.192093 kernel: APIC: Switch to symmetric I/O mode setup
Feb 9 18:59:47.192107 kernel: x2apic enabled
Feb 9 18:59:47.192122 kernel: Switched APIC routing to physical x2apic.
Feb 9 18:59:47.192135 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093d6e846, max_idle_ns: 440795249997 ns
Feb 9 18:59:47.192152 kernel: Calibrating delay loop (skipped) preset value.. 5000.00 BogoMIPS (lpj=2500004)
Feb 9 18:59:47.192165 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 9 18:59:47.192178 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 9 18:59:47.192193 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 9 18:59:47.192217 kernel: Spectre V2 : Mitigation: Retpolines
Feb 9 18:59:47.192235 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 9 18:59:47.192249 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 9 18:59:47.192265 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Feb 9 18:59:47.192280 kernel: RETBleed: Vulnerable
Feb 9 18:59:47.192295 kernel: Speculative Store Bypass: Vulnerable
Feb 9 18:59:47.192310 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 9 18:59:47.192325 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 9 18:59:47.192339 kernel: GDS: Unknown: Dependent on hypervisor status
Feb 9 18:59:47.192354 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 9 18:59:47.192372 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 9 18:59:47.192387 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 9 18:59:47.192402 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Feb 9 18:59:47.192417 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Feb 9 18:59:47.192432 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Feb 9 18:59:47.192449 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Feb 9 18:59:47.192464 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Feb 9 18:59:47.192479 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Feb 9 18:59:47.192494 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 9 18:59:47.192509 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Feb 9 18:59:47.192524 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Feb 9 18:59:47.192538 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Feb 9 18:59:47.192553 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Feb 9 18:59:47.192568 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Feb 9 18:59:47.192583 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Feb 9 18:59:47.192636 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Feb 9 18:59:47.192653 kernel: Freeing SMP alternatives memory: 32K
Feb 9 18:59:47.192671 kernel: pid_max: default: 32768 minimum: 301
Feb 9 18:59:47.192685 kernel: LSM: Security Framework initializing
Feb 9 18:59:47.192700 kernel: SELinux: Initializing.
Feb 9 18:59:47.192715 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 9 18:59:47.192730 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 9 18:59:47.192746 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Feb 9 18:59:47.192761 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Feb 9 18:59:47.192777 kernel: signal: max sigframe size: 3632
Feb 9 18:59:47.192792 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 18:59:47.192807 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 9 18:59:47.192839 kernel: smp: Bringing up secondary CPUs ...
Feb 9 18:59:47.192850 kernel: x86: Booting SMP configuration:
Feb 9 18:59:47.192862 kernel: .... node #0, CPUs: #1
Feb 9 18:59:47.192875 kernel: kvm-clock: cpu 1, msr 2ffaa041, secondary cpu clock
Feb 9 18:59:47.192888 kernel: kvm-guest: stealtime: cpu 1, msr 7b71c0c0
Feb 9 18:59:47.192902 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Feb 9 18:59:47.192916 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 9 18:59:47.192929 kernel: smp: Brought up 1 node, 2 CPUs
Feb 9 18:59:47.192942 kernel: smpboot: Max logical packages: 1
Feb 9 18:59:47.192958 kernel: smpboot: Total of 2 processors activated (10000.01 BogoMIPS)
Feb 9 18:59:47.192971 kernel: devtmpfs: initialized
Feb 9 18:59:47.192984 kernel: x86/mm: Memory block size: 128MB
Feb 9 18:59:47.192997 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 18:59:47.193010 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 9 18:59:47.193023 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 18:59:47.193037 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 18:59:47.193050 kernel: audit: initializing netlink subsys (disabled)
Feb 9 18:59:47.193064 kernel: audit: type=2000 audit(1707505186.170:1): state=initialized audit_enabled=0 res=1
Feb 9 18:59:47.193080 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 18:59:47.193093 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 9 18:59:47.193107 kernel: cpuidle: using governor menu
Feb 9 18:59:47.193120 kernel: ACPI: bus type PCI registered
Feb 9 18:59:47.193133 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 18:59:47.193145 kernel: dca service started, version 1.12.1
Feb 9 18:59:47.193158 kernel: PCI: Using configuration type 1 for base access
Feb 9 18:59:47.193171 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 9 18:59:47.193184 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 18:59:47.193200 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 18:59:47.193213 kernel: ACPI: Added _OSI(Module Device)
Feb 9 18:59:47.193226 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 18:59:47.193240 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 18:59:47.193253 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 18:59:47.193266 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 18:59:47.193278 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 18:59:47.193290 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 18:59:47.193303 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Feb 9 18:59:47.193318 kernel: ACPI: Interpreter enabled
Feb 9 18:59:47.193329 kernel: ACPI: PM: (supports S0 S5)
Feb 9 18:59:47.193341 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 9 18:59:47.193354 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 9 18:59:47.193367 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Feb 9 18:59:47.193381 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 9 18:59:47.193582 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Feb 9 18:59:47.193705 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Feb 9 18:59:47.193726 kernel: acpiphp: Slot [3] registered
Feb 9 18:59:47.193739 kernel: acpiphp: Slot [4] registered
Feb 9 18:59:47.193752 kernel: acpiphp: Slot [5] registered
Feb 9 18:59:47.193764 kernel: acpiphp: Slot [6] registered
Feb 9 18:59:47.193777 kernel: acpiphp: Slot [7] registered
Feb 9 18:59:47.193789 kernel: acpiphp: Slot [8] registered
Feb 9 18:59:47.193802 kernel: acpiphp: Slot [9] registered
Feb 9 18:59:47.193838 kernel: acpiphp: Slot [10] registered
Feb 9 18:59:47.193902 kernel: acpiphp: Slot [11] registered
Feb 9 18:59:47.193924 kernel: acpiphp: Slot [12] registered
Feb 9 18:59:47.193938 kernel: acpiphp: Slot [13] registered
Feb 9 18:59:47.193951 kernel: acpiphp: Slot [14] registered
Feb 9 18:59:47.193965 kernel: acpiphp: Slot [15] registered
Feb 9 18:59:47.193978 kernel: acpiphp: Slot [16] registered
Feb 9 18:59:47.193992 kernel: acpiphp: Slot [17] registered
Feb 9 18:59:47.194005 kernel: acpiphp: Slot [18] registered
Feb 9 18:59:47.194018 kernel: acpiphp: Slot [19] registered
Feb 9 18:59:47.194031 kernel: acpiphp: Slot [20] registered
Feb 9 18:59:47.194046 kernel: acpiphp: Slot [21] registered
Feb 9 18:59:47.194059 kernel: acpiphp: Slot [22] registered
Feb 9 18:59:47.194072 kernel: acpiphp: Slot [23] registered
Feb 9 18:59:47.194085 kernel: acpiphp: Slot [24] registered
Feb 9 18:59:47.194098 kernel: acpiphp: Slot [25] registered
Feb 9 18:59:47.194112 kernel: acpiphp: Slot [26] registered
Feb 9 18:59:47.194125 kernel: acpiphp: Slot [27] registered
Feb 9 18:59:47.194139 kernel: acpiphp: Slot [28] registered
Feb 9 18:59:47.194152 kernel: acpiphp: Slot [29] registered
Feb 9 18:59:47.194165 kernel: acpiphp: Slot [30] registered
Feb 9 18:59:47.194181 kernel: acpiphp: Slot [31] registered
Feb 9 18:59:47.194194 kernel: PCI host bridge to bus 0000:00
Feb 9 18:59:47.194328 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 9 18:59:47.194485 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 9 18:59:47.194702 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 9 18:59:47.194898 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Feb 9 18:59:47.195146 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 9 18:59:47.195306 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 9 18:59:47.195439 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 9 18:59:47.195565 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Feb 9 18:59:47.195687 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Feb 9 18:59:47.195805 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Feb 9 18:59:47.195947 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Feb 9 18:59:47.196073 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Feb 9 18:59:47.196202 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Feb 9 18:59:47.196327 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Feb 9 18:59:47.196450 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Feb 9 18:59:47.196573 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Feb 9 18:59:47.196696 kernel: pci 0000:00:01.3: quirk_piix4_acpi+0x0/0x170 took 10742 usecs
Feb 9 18:59:47.196932 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Feb 9 18:59:47.197062 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Feb 9 18:59:47.197193 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Feb 9 18:59:47.197317 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 9 18:59:47.197448 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 9 18:59:47.197576 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Feb 9 18:59:47.197711 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 9 18:59:47.197852 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Feb 9 18:59:47.197876 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 9 18:59:47.197891 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 9 18:59:47.197906 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 9 18:59:47.197920 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 9 18:59:47.197936 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 9 18:59:47.197950 kernel: iommu: Default domain type: Translated
Feb 9 18:59:47.197965 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 9 18:59:47.198090 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Feb 9 18:59:47.198219 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 9 18:59:47.198364 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Feb 9 18:59:47.198383 kernel: vgaarb: loaded
Feb 9 18:59:47.198399 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 18:59:47.198412 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 9 18:59:47.198426 kernel: PTP clock support registered
Feb 9 18:59:47.198440 kernel: PCI: Using ACPI for IRQ routing
Feb 9 18:59:47.198455 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 9 18:59:47.198470 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 9 18:59:47.198488 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Feb 9 18:59:47.198502 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Feb 9 18:59:47.198518 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Feb 9 18:59:47.198533 kernel: clocksource: Switched to clocksource kvm-clock
Feb 9 18:59:47.198547 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 18:59:47.198562 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 18:59:47.198575 kernel: pnp: PnP ACPI init
Feb 9 18:59:47.198590 kernel: pnp: PnP ACPI: found 5 devices
Feb 9 18:59:47.198605 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 9 18:59:47.198622 kernel: NET: Registered PF_INET protocol family
Feb 9 18:59:47.198637 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 9 18:59:47.198652 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb 9 18:59:47.198667 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 18:59:47.198682 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 9 18:59:47.198697 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Feb 9 18:59:47.198712 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb 9 18:59:47.198727 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 9 18:59:47.198742 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 9 18:59:47.198760 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 18:59:47.198775 kernel: NET: Registered PF_XDP protocol family
Feb 9 18:59:47.198908 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 9 18:59:47.199024 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 9 18:59:47.199136 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 9 18:59:47.199249 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Feb 9 18:59:47.199388 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 9 18:59:47.199520 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Feb 9 18:59:47.199542 kernel: PCI: CLS 0 bytes, default 64
Feb 9 18:59:47.199558 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 9 18:59:47.199573 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093d6e846, max_idle_ns: 440795249997 ns
Feb 9 18:59:47.199589 kernel: clocksource: Switched to clocksource tsc
Feb 9 18:59:47.199604 kernel: Initialise system trusted keyrings
Feb 9 18:59:47.199619 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Feb 9 18:59:47.199634 kernel: Key type asymmetric registered
Feb 9 18:59:47.199649 kernel: Asymmetric key parser 'x509' registered
Feb 9 18:59:47.199667 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 18:59:47.199681 kernel: io scheduler mq-deadline registered
Feb 9 18:59:47.199697 kernel: io scheduler kyber registered
Feb 9 18:59:47.199712 kernel: io scheduler bfq registered
Feb 9 18:59:47.199727 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 9 18:59:47.199743 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 18:59:47.199757 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 9 18:59:47.199772 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 9 18:59:47.199788 kernel: i8042: Warning: Keylock active
Feb 9 18:59:47.199805 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 9 18:59:47.199833 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 9 18:59:47.199963 kernel: rtc_cmos 00:00: RTC can wake from S4
Feb 9 18:59:47.200075 kernel: rtc_cmos 00:00: registered as rtc0
Feb 9 18:59:47.200185 kernel: rtc_cmos 00:00: setting system clock to 2024-02-09T18:59:46 UTC (1707505186)
Feb 9 18:59:47.200295 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Feb 9 18:59:47.200313 kernel: intel_pstate: CPU model not supported
Feb 9 18:59:47.200392 kernel: NET: Registered PF_INET6 protocol family
Feb 9 18:59:47.200409 kernel: Segment Routing with IPv6
Feb 9 18:59:47.200422 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 18:59:47.200436 kernel: NET: Registered PF_PACKET protocol family
Feb 9 18:59:47.200449 kernel: Key type dns_resolver registered
Feb 9 18:59:47.200462 kernel: IPI shorthand broadcast: enabled
Feb 9 18:59:47.200475 kernel: sched_clock: Marking stable (461236411, 274980146)->(847611668, -111395111)
Feb 9 18:59:47.200488 kernel: registered taskstats version 1
Feb 9 18:59:47.200501 kernel: Loading compiled-in X.509 certificates
Feb 9 18:59:47.200514 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 56154408a02b3bd349a9e9180c9bd837fd1d636a'
Feb 9 18:59:47.200530 kernel: Key type .fscrypt registered
Feb 9 18:59:47.200543 kernel: Key type fscrypt-provisioning registered
Feb 9 18:59:47.200557 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 9 18:59:47.200570 kernel: ima: Allocated hash algorithm: sha1
Feb 9 18:59:47.200584 kernel: ima: No architecture policies found
Feb 9 18:59:47.200597 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb 9 18:59:47.200611 kernel: Write protecting the kernel read-only data: 28672k
Feb 9 18:59:47.200625 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 9 18:59:47.200638 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb 9 18:59:47.200654 kernel: Run /init as init process
Feb 9 18:59:47.200668 kernel: with arguments:
Feb 9 18:59:47.200681 kernel: /init
Feb 9 18:59:47.200693 kernel: with environment:
Feb 9 18:59:47.200705 kernel: HOME=/
Feb 9 18:59:47.200718 kernel: TERM=linux
Feb 9 18:59:47.200731 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 9 18:59:47.200749 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 18:59:47.200768 systemd[1]: Detected virtualization amazon.
Feb 9 18:59:47.200782 systemd[1]: Detected architecture x86-64.
Feb 9 18:59:47.200795 systemd[1]: Running in initrd.
Feb 9 18:59:47.200891 systemd[1]: No hostname configured, using default hostname.
Feb 9 18:59:47.200922 systemd[1]: Hostname set to .
Feb 9 18:59:47.200942 systemd[1]: Initializing machine ID from VM UUID.
Feb 9 18:59:47.201007 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 9 18:59:47.201024 systemd[1]: Queued start job for default target initrd.target.
Feb 9 18:59:47.201039 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 18:59:47.201057 systemd[1]: Reached target cryptsetup.target.
Feb 9 18:59:47.201071 systemd[1]: Reached target paths.target.
Feb 9 18:59:47.201085 systemd[1]: Reached target slices.target.
Feb 9 18:59:47.201099 systemd[1]: Reached target swap.target.
Feb 9 18:59:47.201113 systemd[1]: Reached target timers.target.
Feb 9 18:59:47.201131 systemd[1]: Listening on iscsid.socket.
Feb 9 18:59:47.201145 systemd[1]: Listening on iscsiuio.socket.
Feb 9 18:59:47.201159 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 18:59:47.201173 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 18:59:47.201187 systemd[1]: Listening on systemd-journald.socket.
Feb 9 18:59:47.201201 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 18:59:47.201215 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 18:59:47.201232 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 18:59:47.201246 systemd[1]: Reached target sockets.target.
Feb 9 18:59:47.201261 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 18:59:47.201275 systemd[1]: Finished network-cleanup.service.
Feb 9 18:59:47.201290 systemd[1]: Starting systemd-fsck-usr.service...
Feb 9 18:59:47.201305 systemd[1]: Starting systemd-journald.service...
Feb 9 18:59:47.201319 systemd[1]: Starting systemd-modules-load.service...
Feb 9 18:59:47.201334 systemd[1]: Starting systemd-resolved.service...
Feb 9 18:59:47.201349 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 9 18:59:47.201366 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 18:59:47.201381 systemd[1]: Finished systemd-fsck-usr.service.
Feb 9 18:59:47.201395 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 18:59:47.201409 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 18:59:47.201503 systemd-journald[185]: Journal started
Feb 9 18:59:47.201618 systemd-journald[185]: Runtime Journal (/run/log/journal/ec2deaa0bccb548bb0bd03bf89611f0f) is 4.8M, max 38.7M, 33.9M free.
Feb 9 18:59:47.211856 systemd-modules-load[186]: Inserted module 'overlay'
Feb 9 18:59:47.366360 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 9 18:59:47.366408 kernel: Bridge firewalling registered
Feb 9 18:59:47.366427 kernel: SCSI subsystem initialized
Feb 9 18:59:47.366443 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 9 18:59:47.366463 kernel: device-mapper: uevent: version 1.0.3
Feb 9 18:59:47.366481 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 9 18:59:47.366497 systemd[1]: Started systemd-journald.service.
Feb 9 18:59:47.366519 kernel: audit: type=1130 audit(1707505187.357:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:59:47.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:59:47.248426 systemd-modules-load[186]: Inserted module 'br_netfilter'
Feb 9 18:59:47.261884 systemd-resolved[187]: Positive Trust Anchors:
Feb 9 18:59:47.378300 kernel: audit: type=1130 audit(1707505187.365:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:59:47.378334 kernel: audit: type=1130 audit(1707505187.372:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:59:47.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:59:47.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:59:47.261910 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 18:59:47.261960 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 18:59:47.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:59:47.265863 systemd-resolved[187]: Defaulting to hostname 'linux'.
Feb 9 18:59:47.288913 systemd-modules-load[186]: Inserted module 'dm_multipath'
Feb 9 18:59:47.399032 kernel: audit: type=1130 audit(1707505187.380:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:59:47.399075 kernel: audit: type=1130 audit(1707505187.393:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:59:47.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:59:47.367802 systemd[1]: Started systemd-resolved.service.
Feb 9 18:59:47.378504 systemd[1]: Finished systemd-modules-load.service.
Feb 9 18:59:47.388250 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 9 18:59:47.399226 systemd[1]: Reached target nss-lookup.target.
Feb 9 18:59:47.402185 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 9 18:59:47.407413 systemd[1]: Starting systemd-sysctl.service...
Feb 9 18:59:47.425154 systemd[1]: Finished systemd-sysctl.service.
Feb 9 18:59:47.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:59:47.431833 kernel: audit: type=1130 audit(1707505187.425:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:59:47.437436 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 9 18:59:47.446366 kernel: audit: type=1130 audit(1707505187.437:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:59:47.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:59:47.439907 systemd[1]: Starting dracut-cmdline.service...
Feb 9 18:59:47.456128 dracut-cmdline[206]: dracut-dracut-053
Feb 9 18:59:47.460462 dracut-cmdline[206]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 18:59:47.572022 kernel: Loading iSCSI transport class v2.0-870.
Feb 9 18:59:47.593842 kernel: iscsi: registered transport (tcp)
Feb 9 18:59:47.624460 kernel: iscsi: registered transport (qla4xxx)
Feb 9 18:59:47.624536 kernel: QLogic iSCSI HBA Driver
Feb 9 18:59:47.676281 systemd[1]: Finished dracut-cmdline.service.
Feb 9 18:59:47.677878 systemd[1]: Starting dracut-pre-udev.service...
Feb 9 18:59:47.686309 kernel: audit: type=1130 audit(1707505187.675:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:59:47.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:59:47.740855 kernel: raid6: avx512x4 gen() 13921 MB/s
Feb 9 18:59:47.757863 kernel: raid6: avx512x4 xor() 6615 MB/s
Feb 9 18:59:47.775864 kernel: raid6: avx512x2 gen() 16067 MB/s
Feb 9 18:59:47.792862 kernel: raid6: avx512x2 xor() 20599 MB/s
Feb 9 18:59:47.809860 kernel: raid6: avx512x1 gen() 16142 MB/s
Feb 9 18:59:47.826855 kernel: raid6: avx512x1 xor() 19668 MB/s
Feb 9 18:59:47.843869 kernel: raid6: avx2x4 gen() 16388 MB/s
Feb 9 18:59:47.861849 kernel: raid6: avx2x4 xor() 6484 MB/s
Feb 9 18:59:47.878869 kernel: raid6: avx2x2 gen() 10580 MB/s
Feb 9 18:59:47.896878 kernel: raid6: avx2x2 xor() 13544 MB/s
Feb 9 18:59:47.914860 kernel: raid6: avx2x1 gen() 10246 MB/s
Feb 9 18:59:47.931858 kernel: raid6: avx2x1 xor() 12804 MB/s
Feb 9 18:59:47.949856 kernel: raid6: sse2x4 gen() 7091 MB/s
Feb 9 18:59:47.967956 kernel: raid6: sse2x4 xor() 4015 MB/s
Feb 9 18:59:47.986874 kernel: raid6: sse2x2 gen() 7751 MB/s
Feb 9 18:59:48.004876 kernel: raid6: sse2x2 xor() 4718 MB/s
Feb 9 18:59:48.021865 kernel: raid6: sse2x1 gen() 7478 MB/s
Feb 9 18:59:48.040159 kernel: raid6: sse2x1 xor() 3956 MB/s
Feb 9 18:59:48.040237 kernel: raid6: using algorithm avx2x4 gen() 16388 MB/s
Feb 9 18:59:48.040256 kernel: raid6: .... xor() 6484 MB/s, rmw enabled
Feb 9 18:59:48.042014 kernel: raid6: using avx512x2 recovery algorithm
Feb 9 18:59:48.059843 kernel: xor: automatically using best checksumming function avx
Feb 9 18:59:48.221837 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Feb 9 18:59:48.233555 systemd[1]: Finished dracut-pre-udev.service.
Feb 9 18:59:48.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:59:48.234000 audit: BPF prog-id=7 op=LOAD
Feb 9 18:59:48.239000 audit: BPF prog-id=8 op=LOAD
Feb 9 18:59:48.242835 kernel: audit: type=1130 audit(1707505188.233:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:59:48.241050 systemd[1]: Starting systemd-udevd.service...
Feb 9 18:59:48.258419 systemd-udevd[384]: Using default interface naming scheme 'v252'.
Feb 9 18:59:48.264348 systemd[1]: Started systemd-udevd.service.
Feb 9 18:59:48.266663 systemd[1]: Starting dracut-pre-trigger.service...
Feb 9 18:59:48.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:59:48.283774 dracut-pre-trigger[392]: rd.md=0: removing MD RAID activation
Feb 9 18:59:48.316354 systemd[1]: Finished dracut-pre-trigger.service.
Feb 9 18:59:48.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:59:48.320390 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 18:59:48.385947 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 18:59:48.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:59:48.476837 kernel: cryptd: max_cpu_qlen set to 1000
Feb 9 18:59:48.513015 kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 9 18:59:48.513248 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 9 18:59:48.520850 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Feb 9 18:59:48.527859 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:8e:d5:3d:f0:85
Feb 9 18:59:48.532147 kernel: nvme nvme0: pci function 0000:00:04.0
Feb 9 18:59:48.532377 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 9 18:59:48.530313 (udev-worker)[432]: Network interface NamePolicy= disabled on kernel command line.
Feb 9 18:59:48.804602 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 9 18:59:48.804937 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 9 18:59:48.804952 kernel: GPT:9289727 != 16777215
Feb 9 18:59:48.804963 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 9 18:59:48.804974 kernel: GPT:9289727 != 16777215
Feb 9 18:59:48.804989 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 9 18:59:48.805002 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 9 18:59:48.805012 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 9 18:59:48.805023 kernel: AES CTR mode by8 optimization enabled
Feb 9 18:59:48.805034 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (424)
Feb 9 18:59:48.732836 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 9 18:59:48.828011 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 18:59:48.836378 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 9 18:59:48.844804 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 9 18:59:48.850518 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 9 18:59:48.854951 systemd[1]: Starting disk-uuid.service...
Feb 9 18:59:48.864674 disk-uuid[586]: Primary Header is updated.
Feb 9 18:59:48.864674 disk-uuid[586]: Secondary Entries is updated.
Feb 9 18:59:48.864674 disk-uuid[586]: Secondary Header is updated.
Feb 9 18:59:48.870941 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 9 18:59:48.877837 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 9 18:59:48.882831 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 9 18:59:49.886723 disk-uuid[587]: The operation has completed successfully.
Feb 9 18:59:49.888235 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 9 18:59:50.048245 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 9 18:59:50.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:59:50.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:59:50.048416 systemd[1]: Finished disk-uuid.service.
Feb 9 18:59:50.056878 systemd[1]: Starting verity-setup.service...
Feb 9 18:59:50.075831 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 9 18:59:50.173491 systemd[1]: Found device dev-mapper-usr.device.
Feb 9 18:59:50.176472 systemd[1]: Mounting sysusr-usr.mount...
Feb 9 18:59:50.181587 systemd[1]: Finished verity-setup.service.
Feb 9 18:59:50.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:59:50.340836 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 9 18:59:50.341750 systemd[1]: Mounted sysusr-usr.mount.
Feb 9 18:59:50.342272 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 9 18:59:50.352796 systemd[1]: Starting ignition-setup.service...
Feb 9 18:59:50.365351 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 9 18:59:50.415435 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 9 18:59:50.418676 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 9 18:59:50.418715 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Feb 9 18:59:50.438670 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 9 18:59:50.462303 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 9 18:59:50.504833 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 9 18:59:50.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:59:50.506000 audit: BPF prog-id=9 op=LOAD
Feb 9 18:59:50.507760 systemd[1]: Starting systemd-networkd.service...
Feb 9 18:59:50.536632 systemd[1]: Finished ignition-setup.service.
Feb 9 18:59:50.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:59:50.539523 systemd[1]: Starting ignition-fetch-offline.service...
Feb 9 18:59:50.545264 systemd-networkd[1096]: lo: Link UP
Feb 9 18:59:50.545275 systemd-networkd[1096]: lo: Gained carrier
Feb 9 18:59:50.545898 systemd-networkd[1096]: Enumeration completed
Feb 9 18:59:50.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:59:50.546152 systemd-networkd[1096]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 18:59:50.546686 systemd[1]: Started systemd-networkd.service.
Feb 9 18:59:50.549031 systemd[1]: Reached target network.target.
Feb 9 18:59:50.562743 systemd[1]: Starting iscsiuio.service...
Feb 9 18:59:50.569504 systemd-networkd[1096]: eth0: Link UP
Feb 9 18:59:50.571082 systemd-networkd[1096]: eth0: Gained carrier
Feb 9 18:59:50.574842 systemd[1]: Started iscsiuio.service.
Feb 9 18:59:50.575000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:59:50.577943 systemd[1]: Starting iscsid.service...
Feb 9 18:59:50.586422 iscsid[1103]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 18:59:50.586422 iscsid[1103]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Feb 9 18:59:50.586422 iscsid[1103]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 9 18:59:50.586422 iscsid[1103]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 9 18:59:50.586422 iscsid[1103]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 18:59:50.604258 iscsid[1103]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 9 18:59:50.592543 systemd[1]: Started iscsid.service.
Feb 9 18:59:50.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:59:50.610266 systemd[1]: Starting dracut-initqueue.service...
Feb 9 18:59:50.613078 systemd-networkd[1096]: eth0: DHCPv4 address 172.31.19.7/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 9 18:59:50.635450 systemd[1]: Finished dracut-initqueue.service.
Feb 9 18:59:50.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:59:50.635684 systemd[1]: Reached target remote-fs-pre.target.
Feb 9 18:59:50.638599 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 18:59:50.639927 systemd[1]: Reached target remote-fs.target.
Feb 9 18:59:50.644481 systemd[1]: Starting dracut-pre-mount.service...
Feb 9 18:59:50.669221 systemd[1]: Finished dracut-pre-mount.service.
Feb 9 18:59:50.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:59:51.086256 ignition[1099]: Ignition 2.14.0
Feb 9 18:59:51.086266 ignition[1099]: Stage: fetch-offline
Feb 9 18:59:51.086369 ignition[1099]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 18:59:51.086399 ignition[1099]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 18:59:51.104860 ignition[1099]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 18:59:51.105328 ignition[1099]: Ignition finished successfully
Feb 9 18:59:51.108977 systemd[1]: Finished ignition-fetch-offline.service.
Feb 9 18:59:51.119844 kernel: kauditd_printk_skb: 16 callbacks suppressed
Feb 9 18:59:51.119911 kernel: audit: type=1130 audit(1707505191.109:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:59:51.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:59:51.113802 systemd[1]: Starting ignition-fetch.service...
Feb 9 18:59:51.132066 ignition[1122]: Ignition 2.14.0
Feb 9 18:59:51.132079 ignition[1122]: Stage: fetch
Feb 9 18:59:51.132370 ignition[1122]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 18:59:51.132406 ignition[1122]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 18:59:51.144179 ignition[1122]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 18:59:51.146224 ignition[1122]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 18:59:51.153617 ignition[1122]: INFO : PUT result: OK
Feb 9 18:59:51.157592 ignition[1122]: DEBUG : parsed url from cmdline: ""
Feb 9 18:59:51.157592 ignition[1122]: INFO : no config URL provided
Feb 9 18:59:51.157592 ignition[1122]: INFO : reading system config file "/usr/lib/ignition/user.ign"
Feb 9 18:59:51.161797 ignition[1122]: INFO : no config at "/usr/lib/ignition/user.ign"
Feb 9 18:59:51.161797 ignition[1122]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 18:59:51.161797 ignition[1122]: INFO : PUT result: OK
Feb 9 18:59:51.161797 ignition[1122]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 9 18:59:51.161797 ignition[1122]: INFO : GET result: OK
Feb 9 18:59:51.175766 ignition[1122]: DEBUG : parsing config with SHA512: 1d828892b5252a3f323ecf4d3416df21ca5cbdbdc2ef9f77b57193d2a9a041cbe911de4cd5521229139509416ea7800ac1143f4278ca3ce14f89de174d917b69
Feb 9 18:59:51.231672 unknown[1122]: fetched base config from "system"
Feb 9 18:59:51.235367 unknown[1122]: fetched base config from "system"
Feb 9 18:59:51.235389 unknown[1122]: fetched user config from "aws"
Feb 9 18:59:51.237723 ignition[1122]: fetch: fetch complete
Feb 9 18:59:51.237733 ignition[1122]: fetch: fetch passed
Feb 9 18:59:51.237803 ignition[1122]: Ignition finished successfully
Feb 9 18:59:51.241955 systemd[1]: Finished ignition-fetch.service.
Feb 9 18:59:51.259431 kernel: audit: type=1130 audit(1707505191.243:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:59:51.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:59:51.245353 systemd[1]: Starting ignition-kargs.service...
Feb 9 18:59:51.266915 ignition[1128]: Ignition 2.14.0
Feb 9 18:59:51.266928 ignition[1128]: Stage: kargs
Feb 9 18:59:51.267137 ignition[1128]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 18:59:51.267169 ignition[1128]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 18:59:51.290714 ignition[1128]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 18:59:51.292515 ignition[1128]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 18:59:51.294195 ignition[1128]: INFO : PUT result: OK
Feb 9 18:59:51.298371 ignition[1128]: kargs: kargs passed
Feb 9 18:59:51.298446 ignition[1128]: Ignition finished successfully
Feb 9 18:59:51.300777 systemd[1]: Finished ignition-kargs.service.
Feb 9 18:59:51.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:59:51.310169 kernel: audit: type=1130 audit(1707505191.302:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:59:51.304819 systemd[1]: Starting ignition-disks.service...
Feb 9 18:59:51.320209 ignition[1134]: Ignition 2.14.0
Feb 9 18:59:51.320221 ignition[1134]: Stage: disks
Feb 9 18:59:51.320421 ignition[1134]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 18:59:51.320451 ignition[1134]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 18:59:51.330925 ignition[1134]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 18:59:51.333185 ignition[1134]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 18:59:51.336727 ignition[1134]: INFO : PUT result: OK
Feb 9 18:59:51.339791 ignition[1134]: disks: disks passed
Feb 9 18:59:51.339943 ignition[1134]: Ignition finished successfully
Feb 9 18:59:51.342628 systemd[1]: Finished ignition-disks.service.
Feb 9 18:59:51.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:59:51.345151 systemd[1]: Reached target initrd-root-device.target.
Feb 9 18:59:51.349842 kernel: audit: type=1130 audit(1707505191.344:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:59:51.351308 systemd[1]: Reached target local-fs-pre.target.
Feb 9 18:59:51.351406 systemd[1]: Reached target local-fs.target.
Feb 9 18:59:51.354104 systemd[1]: Reached target sysinit.target.
Feb 9 18:59:51.356534 systemd[1]: Reached target basic.target.
Feb 9 18:59:51.358584 systemd[1]: Starting systemd-fsck-root.service...
Feb 9 18:59:51.385656 systemd-fsck[1142]: ROOT: clean, 602/553520 files, 56014/553472 blocks
Feb 9 18:59:51.394323 systemd[1]: Finished systemd-fsck-root.service.
Feb 9 18:59:51.402933 kernel: audit: type=1130 audit(1707505191.394:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:59:51.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:59:51.396528 systemd[1]: Mounting sysroot.mount...
Feb 9 18:59:51.417832 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 9 18:59:51.419968 systemd[1]: Mounted sysroot.mount.
Feb 9 18:59:51.421742 systemd[1]: Reached target initrd-root-fs.target.
Feb 9 18:59:51.425096 systemd[1]: Mounting sysroot-usr.mount...
Feb 9 18:59:51.427965 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Feb 9 18:59:51.428117 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 9 18:59:51.428152 systemd[1]: Reached target ignition-diskful.target.
Feb 9 18:59:51.437879 systemd[1]: Mounted sysroot-usr.mount.
Feb 9 18:59:51.441998 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 9 18:59:51.444192 systemd[1]: Starting initrd-setup-root.service...
Feb 9 18:59:51.455196 initrd-setup-root[1164]: cut: /sysroot/etc/passwd: No such file or directory
Feb 9 18:59:51.465891 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1159)
Feb 9 18:59:51.469844 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 9 18:59:51.469900 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 9 18:59:51.469919 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Feb 9 18:59:51.475842 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 9 18:59:51.476907 initrd-setup-root[1190]: cut: /sysroot/etc/group: No such file or directory
Feb 9 18:59:51.481183 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 18:59:51.486735 initrd-setup-root[1198]: cut: /sysroot/etc/shadow: No such file or directory
Feb 9 18:59:51.496540 initrd-setup-root[1206]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 9 18:59:51.612076 systemd[1]: Finished initrd-setup-root.service.
Feb 9 18:59:51.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:59:51.613410 systemd[1]: Starting ignition-mount.service...
Feb 9 18:59:51.621145 kernel: audit: type=1130 audit(1707505191.611:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:59:51.624010 systemd[1]: Starting sysroot-boot.service...
Feb 9 18:59:51.629157 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Feb 9 18:59:51.629410 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Feb 9 18:59:51.657190 systemd[1]: Finished sysroot-boot.service.
Feb 9 18:59:51.663825 kernel: audit: type=1130 audit(1707505191.657:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:59:51.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:59:51.663967 ignition[1226]: INFO : Ignition 2.14.0
Feb 9 18:59:51.663967 ignition[1226]: INFO : Stage: mount
Feb 9 18:59:51.668367 ignition[1226]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 18:59:51.668367 ignition[1226]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 18:59:51.681593 ignition[1226]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 18:59:51.683204 ignition[1226]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 18:59:51.685510 ignition[1226]: INFO : PUT result: OK
Feb 9 18:59:51.688994 ignition[1226]: INFO : mount: mount passed
Feb 9 18:59:51.689966 ignition[1226]: INFO : Ignition finished successfully
Feb 9 18:59:51.691922 systemd[1]: Finished ignition-mount.service.
Feb 9 18:59:51.703790 kernel: audit: type=1130 audit(1707505191.693:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:59:51.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:59:51.695563 systemd[1]: Starting ignition-files.service...
Feb 9 18:59:51.707572 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 9 18:59:51.722876 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1234)
Feb 9 18:59:51.726567 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 9 18:59:51.726630 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 9 18:59:51.726648 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Feb 9 18:59:51.734879 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 9 18:59:51.738042 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 18:59:51.752933 ignition[1253]: INFO : Ignition 2.14.0
Feb 9 18:59:51.752933 ignition[1253]: INFO : Stage: files
Feb 9 18:59:51.755075 ignition[1253]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 18:59:51.755075 ignition[1253]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 18:59:51.772148 ignition[1253]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 18:59:51.774572 ignition[1253]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 18:59:51.776570 ignition[1253]: INFO : PUT result: OK
Feb 9 18:59:51.785906 ignition[1253]: DEBUG : files: compiled without relabeling support, skipping
Feb 9 18:59:51.792259 ignition[1253]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 9 18:59:51.792259 ignition[1253]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 9 18:59:51.845791 ignition[1253]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 9 18:59:51.848584 ignition[1253]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 9 18:59:51.852897 unknown[1253]: wrote ssh authorized keys file for user: core
Feb 9 18:59:51.854856 ignition[1253]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 9 18:59:51.856519 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 9 18:59:51.856519 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 9 18:59:51.856519 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 9 18:59:51.856519 ignition[1253]: INFO : GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 9 18:59:51.909157 ignition[1253]: INFO : GET result: OK Feb 9 18:59:52.010013 systemd-networkd[1096]: eth0: Gained IPv6LL Feb 9 18:59:52.077469 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 9 18:59:52.080387 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 9 18:59:52.080387 ignition[1253]: INFO : GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1 Feb 9 18:59:52.536069 ignition[1253]: INFO : GET result: OK Feb 9 18:59:52.744283 ignition[1253]: DEBUG : file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449 Feb 9 18:59:52.746956 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 9 18:59:52.746956 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 9 18:59:52.746956 ignition[1253]: INFO : GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1 Feb 9 
18:59:53.149417 ignition[1253]: INFO : GET result: OK Feb 9 18:59:53.332969 ignition[1253]: DEBUG : file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d Feb 9 18:59:53.335971 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 9 18:59:53.335971 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet" Feb 9 18:59:53.341119 ignition[1253]: INFO : GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1 Feb 9 18:59:53.455275 ignition[1253]: INFO : GET result: OK Feb 9 18:59:54.199194 ignition[1253]: DEBUG : file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b Feb 9 18:59:54.202334 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 9 18:59:54.202334 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/eks/bootstrap.sh" Feb 9 18:59:54.202334 ignition[1253]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Feb 9 18:59:54.216062 ignition[1253]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3093182554" Feb 9 18:59:54.223773 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1256) Feb 9 18:59:54.223858 ignition[1253]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3093182554": device or resource busy Feb 9 18:59:54.223858 ignition[1253]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3093182554", trying btrfs: device or resource busy Feb 9 18:59:54.223858 ignition[1253]: INFO : op(2): [started] 
mounting "/dev/disk/by-label/OEM" at "/mnt/oem3093182554" Feb 9 18:59:54.234167 ignition[1253]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3093182554" Feb 9 18:59:54.234167 ignition[1253]: INFO : op(3): [started] unmounting "/mnt/oem3093182554" Feb 9 18:59:54.238350 ignition[1253]: INFO : op(3): [finished] unmounting "/mnt/oem3093182554" Feb 9 18:59:54.238350 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/eks/bootstrap.sh" Feb 9 18:59:54.238350 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 9 18:59:54.238350 ignition[1253]: INFO : GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1 Feb 9 18:59:54.246887 systemd[1]: mnt-oem3093182554.mount: Deactivated successfully. Feb 9 18:59:54.301171 ignition[1253]: INFO : GET result: OK Feb 9 18:59:54.568301 ignition[1253]: DEBUG : file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660 Feb 9 18:59:54.571056 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 9 18:59:54.571056 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/kubectl" Feb 9 18:59:54.571056 ignition[1253]: INFO : GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubectl: attempt #1 Feb 9 18:59:54.634034 ignition[1253]: INFO : GET result: OK Feb 9 18:59:54.903696 ignition[1253]: DEBUG : file matches expected sum of: 97840854134909d75a1a2563628cc4ba632067369ce7fc8a8a1e90a387d32dd7bfd73f4f5b5a82ef842088e7470692951eb7fc869c5f297dd740f855672ee628 Feb 9 18:59:54.906550 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/kubectl" Feb 9 18:59:54.906550 ignition[1253]: 
INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 9 18:59:54.910627 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 9 18:59:54.910627 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/install.sh" Feb 9 18:59:54.915169 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/install.sh" Feb 9 18:59:54.917302 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 9 18:59:54.919528 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 9 18:59:54.921845 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 18:59:54.924296 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 18:59:54.924296 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 18:59:54.930596 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 18:59:54.930596 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 18:59:54.936146 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 18:59:54.936146 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(11): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 9 18:59:54.936146 
ignition[1253]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Feb 9 18:59:54.948317 ignition[1253]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2171137444" Feb 9 18:59:54.951903 ignition[1253]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2171137444": device or resource busy Feb 9 18:59:54.951903 ignition[1253]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2171137444", trying btrfs: device or resource busy Feb 9 18:59:54.951903 ignition[1253]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2171137444" Feb 9 18:59:54.951903 ignition[1253]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2171137444" Feb 9 18:59:54.951903 ignition[1253]: INFO : op(6): [started] unmounting "/mnt/oem2171137444" Feb 9 18:59:54.951903 ignition[1253]: INFO : op(6): [finished] unmounting "/mnt/oem2171137444" Feb 9 18:59:54.951903 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(11): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 9 18:59:54.951903 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(12): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Feb 9 18:59:54.951903 ignition[1253]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Feb 9 18:59:54.957915 systemd[1]: mnt-oem2171137444.mount: Deactivated successfully. 
Feb 9 18:59:54.990689 ignition[1253]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2898413610" Feb 9 18:59:54.993327 ignition[1253]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2898413610": device or resource busy Feb 9 18:59:54.993327 ignition[1253]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2898413610", trying btrfs: device or resource busy Feb 9 18:59:54.993327 ignition[1253]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2898413610" Feb 9 18:59:55.004177 ignition[1253]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2898413610" Feb 9 18:59:55.006629 ignition[1253]: INFO : op(9): [started] unmounting "/mnt/oem2898413610" Feb 9 18:59:55.008043 ignition[1253]: INFO : op(9): [finished] unmounting "/mnt/oem2898413610" Feb 9 18:59:55.008043 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(12): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Feb 9 18:59:55.008043 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Feb 9 18:59:55.008043 ignition[1253]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Feb 9 18:59:55.018521 ignition[1253]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2507344041" Feb 9 18:59:55.020345 ignition[1253]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2507344041": device or resource busy Feb 9 18:59:55.020345 ignition[1253]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2507344041", trying btrfs: device or resource busy Feb 9 18:59:55.020345 ignition[1253]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2507344041" Feb 9 18:59:55.020345 ignition[1253]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2507344041" Feb 9 18:59:55.020345 
ignition[1253]: INFO : op(c): [started] unmounting "/mnt/oem2507344041" Feb 9 18:59:55.032071 ignition[1253]: INFO : op(c): [finished] unmounting "/mnt/oem2507344041" Feb 9 18:59:55.032071 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Feb 9 18:59:55.032071 ignition[1253]: INFO : files: op(14): [started] processing unit "coreos-metadata-sshkeys@.service" Feb 9 18:59:55.032071 ignition[1253]: INFO : files: op(14): [finished] processing unit "coreos-metadata-sshkeys@.service" Feb 9 18:59:55.032071 ignition[1253]: INFO : files: op(15): [started] processing unit "amazon-ssm-agent.service" Feb 9 18:59:55.032071 ignition[1253]: INFO : files: op(15): op(16): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Feb 9 18:59:55.032071 ignition[1253]: INFO : files: op(15): op(16): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Feb 9 18:59:55.032071 ignition[1253]: INFO : files: op(15): [finished] processing unit "amazon-ssm-agent.service" Feb 9 18:59:55.032071 ignition[1253]: INFO : files: op(17): [started] processing unit "nvidia.service" Feb 9 18:59:55.032071 ignition[1253]: INFO : files: op(17): [finished] processing unit "nvidia.service" Feb 9 18:59:55.032071 ignition[1253]: INFO : files: op(18): [started] processing unit "containerd.service" Feb 9 18:59:55.032071 ignition[1253]: INFO : files: op(18): op(19): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 9 18:59:55.032071 ignition[1253]: INFO : files: op(18): op(19): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 9 18:59:55.032071 ignition[1253]: INFO : files: op(18): [finished] processing unit "containerd.service" Feb 9 18:59:55.032071 
ignition[1253]: INFO : files: op(1a): [started] processing unit "prepare-cni-plugins.service" Feb 9 18:59:55.032071 ignition[1253]: INFO : files: op(1a): op(1b): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 18:59:55.032071 ignition[1253]: INFO : files: op(1a): op(1b): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 18:59:55.032071 ignition[1253]: INFO : files: op(1a): [finished] processing unit "prepare-cni-plugins.service" Feb 9 18:59:55.032071 ignition[1253]: INFO : files: op(1c): [started] processing unit "prepare-critools.service" Feb 9 18:59:55.077792 ignition[1253]: INFO : files: op(1c): op(1d): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 18:59:55.077792 ignition[1253]: INFO : files: op(1c): op(1d): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 18:59:55.077792 ignition[1253]: INFO : files: op(1c): [finished] processing unit "prepare-critools.service" Feb 9 18:59:55.077792 ignition[1253]: INFO : files: op(1e): [started] processing unit "prepare-helm.service" Feb 9 18:59:55.077792 ignition[1253]: INFO : files: op(1e): op(1f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 18:59:55.077792 ignition[1253]: INFO : files: op(1e): op(1f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 18:59:55.077792 ignition[1253]: INFO : files: op(1e): [finished] processing unit "prepare-helm.service" Feb 9 18:59:55.077792 ignition[1253]: INFO : files: op(20): [started] setting preset to enabled for "prepare-critools.service" Feb 9 18:59:55.077792 ignition[1253]: INFO : files: op(20): [finished] setting preset to enabled for "prepare-critools.service" Feb 9 18:59:55.077792 
ignition[1253]: INFO : files: op(21): [started] setting preset to enabled for "prepare-helm.service" Feb 9 18:59:55.077792 ignition[1253]: INFO : files: op(21): [finished] setting preset to enabled for "prepare-helm.service" Feb 9 18:59:55.077792 ignition[1253]: INFO : files: op(22): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 9 18:59:55.077792 ignition[1253]: INFO : files: op(22): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 9 18:59:55.077792 ignition[1253]: INFO : files: op(23): [started] setting preset to enabled for "amazon-ssm-agent.service" Feb 9 18:59:55.077792 ignition[1253]: INFO : files: op(23): [finished] setting preset to enabled for "amazon-ssm-agent.service" Feb 9 18:59:55.077792 ignition[1253]: INFO : files: op(24): [started] setting preset to enabled for "nvidia.service" Feb 9 18:59:55.077792 ignition[1253]: INFO : files: op(24): [finished] setting preset to enabled for "nvidia.service" Feb 9 18:59:55.077792 ignition[1253]: INFO : files: op(25): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 18:59:55.077792 ignition[1253]: INFO : files: op(25): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 18:59:55.077792 ignition[1253]: INFO : files: createResultFile: createFiles: op(26): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 18:59:55.136189 kernel: audit: type=1130 audit(1707505195.083:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:59:55.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:59:55.136293 ignition[1253]: INFO : files: createResultFile: createFiles: op(26): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 18:59:55.136293 ignition[1253]: INFO : files: files passed Feb 9 18:59:55.136293 ignition[1253]: INFO : Ignition finished successfully Feb 9 18:59:55.078680 systemd[1]: Finished ignition-files.service. Feb 9 18:59:55.098662 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 9 18:59:55.146462 initrd-setup-root-after-ignition[1279]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 18:59:55.155107 kernel: audit: type=1130 audit(1707505195.144:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:59:55.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:59:55.128310 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 9 18:59:55.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:59:55.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:59:55.130091 systemd[1]: Starting ignition-quench.service... Feb 9 18:59:55.140623 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 18:59:55.146309 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 18:59:55.146402 systemd[1]: Finished ignition-quench.service. 
Feb 9 18:59:55.157993 systemd[1]: Reached target ignition-complete.target. Feb 9 18:59:55.166496 systemd[1]: Starting initrd-parse-etc.service... Feb 9 18:59:55.202441 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 18:59:55.202672 systemd[1]: Finished initrd-parse-etc.service. Feb 9 18:59:55.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:59:55.207000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:59:55.208256 systemd[1]: Reached target initrd-fs.target. Feb 9 18:59:55.210940 systemd[1]: Reached target initrd.target. Feb 9 18:59:55.213311 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 18:59:55.216714 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 18:59:55.230711 systemd[1]: mnt-oem2898413610.mount: Deactivated successfully. Feb 9 18:59:55.249240 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 18:59:55.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:59:55.252663 systemd[1]: Starting initrd-cleanup.service... Feb 9 18:59:55.265522 systemd[1]: Stopped target nss-lookup.target. Feb 9 18:59:55.269029 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 18:59:55.271776 systemd[1]: Stopped target timers.target. Feb 9 18:59:55.275018 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 18:59:55.275178 systemd[1]: Stopped dracut-pre-pivot.service. 
Feb 9 18:59:55.281000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:59:55.283463 systemd[1]: Stopped target initrd.target. Feb 9 18:59:55.285022 systemd[1]: Stopped target basic.target. Feb 9 18:59:55.287445 systemd[1]: Stopped target ignition-complete.target. Feb 9 18:59:55.290451 systemd[1]: Stopped target ignition-diskful.target. Feb 9 18:59:55.293014 systemd[1]: Stopped target initrd-root-device.target. Feb 9 18:59:55.295961 systemd[1]: Stopped target remote-fs.target. Feb 9 18:59:55.298501 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 18:59:55.302864 systemd[1]: Stopped target sysinit.target. Feb 9 18:59:55.304937 systemd[1]: Stopped target local-fs.target. Feb 9 18:59:55.308568 systemd[1]: Stopped target local-fs-pre.target. Feb 9 18:59:55.311014 systemd[1]: Stopped target swap.target. Feb 9 18:59:55.313165 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 18:59:55.314497 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 18:59:55.315000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:59:55.316886 systemd[1]: Stopped target cryptsetup.target. Feb 9 18:59:55.319105 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 18:59:55.319282 systemd[1]: Stopped dracut-initqueue.service. Feb 9 18:59:55.321000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:59:55.322510 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Feb 9 18:59:55.322000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:59:55.322619 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 18:59:55.324113 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 18:59:55.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:59:55.324210 systemd[1]: Stopped ignition-files.service. Feb 9 18:59:55.331231 systemd[1]: Stopping ignition-mount.service... Feb 9 18:59:55.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:59:55.350000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:59:55.357000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:59:55.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:59:55.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:59:55.362582 ignition[1292]: INFO : Ignition 2.14.0 Feb 9 18:59:55.362582 ignition[1292]: INFO : Stage: umount Feb 9 18:59:55.362582 ignition[1292]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 18:59:55.362582 ignition[1292]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 9 18:59:55.332890 systemd[1]: Stopping iscsiuio.service... Feb 9 18:59:55.334896 systemd[1]: Stopping sysroot-boot.service... Feb 9 18:59:55.340437 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 18:59:55.340744 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 18:59:55.343540 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 18:59:55.384526 ignition[1292]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 9 18:59:55.384526 ignition[1292]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 9 18:59:55.349200 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 18:59:55.388490 ignition[1292]: INFO : PUT result: OK Feb 9 18:59:55.354934 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 9 18:59:55.355107 systemd[1]: Stopped iscsiuio.service. Feb 9 18:59:55.359833 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 18:59:55.360009 systemd[1]: Finished initrd-cleanup.service. Feb 9 18:59:55.394929 ignition[1292]: INFO : umount: umount passed Feb 9 18:59:55.394929 ignition[1292]: INFO : Ignition finished successfully Feb 9 18:59:55.398379 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 18:59:55.398474 systemd[1]: Stopped ignition-mount.service. Feb 9 18:59:55.400000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:59:55.401963 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 18:59:55.402000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:59:55.402041 systemd[1]: Stopped ignition-disks.service. Feb 9 18:59:55.404280 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 18:59:55.405000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:59:55.405281 systemd[1]: Stopped ignition-kargs.service. Feb 9 18:59:55.411395 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 9 18:59:55.411466 systemd[1]: Stopped ignition-fetch.service. Feb 9 18:59:55.412000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:59:55.414767 systemd[1]: Stopped target network.target. Feb 9 18:59:55.417000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:59:55.417314 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 18:59:55.417408 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 18:59:55.419542 systemd[1]: Stopped target paths.target. Feb 9 18:59:55.421678 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 18:59:55.426384 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 18:59:55.428586 systemd[1]: Stopped target slices.target. Feb 9 18:59:55.430517 systemd[1]: Stopped target sockets.target. 
Feb 9 18:59:55.434000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:59:55.431522 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 18:59:55.431554 systemd[1]: Closed iscsid.socket. Feb 9 18:59:55.432531 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 18:59:55.432568 systemd[1]: Closed iscsiuio.socket. Feb 9 18:59:55.433501 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 18:59:55.433545 systemd[1]: Stopped ignition-setup.service. Feb 9 18:59:55.435049 systemd[1]: Stopping systemd-networkd.service... Feb 9 18:59:55.436092 systemd[1]: Stopping systemd-resolved.service... Feb 9 18:59:55.441032 systemd-networkd[1096]: eth0: DHCPv6 lease lost Feb 9 18:59:55.449899 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 9 18:59:55.454894 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 18:59:55.456407 systemd[1]: Stopped systemd-resolved.service. Feb 9 18:59:55.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:59:55.458928 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 18:59:55.459024 systemd[1]: Stopped systemd-networkd.service. Feb 9 18:59:55.460000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:59:55.463683 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 18:59:55.464969 systemd[1]: Stopped sysroot-boot.service. Feb 9 18:59:55.465000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 9 18:59:55.466000 audit: BPF prog-id=9 op=UNLOAD Feb 9 18:59:55.466000 audit: BPF prog-id=6 op=UNLOAD Feb 9 18:59:55.467290 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 18:59:55.467325 systemd[1]: Closed systemd-networkd.socket. Feb 9 18:59:55.474000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:59:55.472972 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 18:59:55.473030 systemd[1]: Stopped initrd-setup-root.service. Feb 9 18:59:55.475867 systemd[1]: Stopping network-cleanup.service... Feb 9 18:59:55.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:59:55.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:59:55.479642 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 18:59:55.486000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:59:55.479707 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 18:59:55.482012 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 18:59:55.482059 systemd[1]: Stopped systemd-sysctl.service. Feb 9 18:59:55.483530 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 18:59:55.483573 systemd[1]: Stopped systemd-modules-load.service. Feb 9 18:59:55.492497 systemd[1]: Stopping systemd-udevd.service... 
Feb 9 18:59:55.497709 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 18:59:55.498989 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 18:59:55.499298 systemd[1]: Stopped systemd-udevd.service. Feb 9 18:59:55.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:59:55.513670 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 18:59:55.516116 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 18:59:55.519592 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 18:59:55.519661 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 18:59:55.523938 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 18:59:55.524021 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 18:59:55.529000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:59:55.530299 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 18:59:55.531775 systemd[1]: Stopped dracut-cmdline.service. Feb 9 18:59:55.533000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:59:55.534341 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 18:59:55.534423 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 18:59:55.536000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:59:55.538898 systemd[1]: Starting initrd-udevadm-cleanup-db.service... 
Feb 9 18:59:55.547971 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 9 18:59:55.549000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:59:55.552000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:59:55.548097 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 9 18:59:55.555000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:59:55.550623 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 18:59:55.550672 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 18:59:55.553241 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 18:59:55.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:59:55.553289 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 18:59:55.557088 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 9 18:59:55.557713 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 18:59:55.557836 systemd[1]: Stopped network-cleanup.service. Feb 9 18:59:55.559518 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 18:59:55.559595 systemd[1]: Finished initrd-udevadm-cleanup-db.service. 
Feb 9 18:59:55.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:59:55.569000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:59:55.570709 systemd[1]: Reached target initrd-switch-root.target. Feb 9 18:59:55.574106 systemd[1]: Starting initrd-switch-root.service... Feb 9 18:59:55.583523 systemd[1]: Switching root. Feb 9 18:59:55.588000 audit: BPF prog-id=5 op=UNLOAD Feb 9 18:59:55.588000 audit: BPF prog-id=4 op=UNLOAD Feb 9 18:59:55.588000 audit: BPF prog-id=3 op=UNLOAD Feb 9 18:59:55.588000 audit: BPF prog-id=8 op=UNLOAD Feb 9 18:59:55.588000 audit: BPF prog-id=7 op=UNLOAD Feb 9 18:59:55.614303 iscsid[1103]: iscsid shutting down. Feb 9 18:59:55.616806 systemd-journald[185]: Received SIGTERM from PID 1 (n/a). Feb 9 18:59:55.617059 systemd-journald[185]: Journal stopped Feb 9 19:00:00.201884 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 19:00:00.201989 kernel: SELinux: Class anon_inode not defined in policy. 
Feb 9 19:00:00.202015 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 19:00:00.202041 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 19:00:00.202063 kernel: SELinux: policy capability open_perms=1 Feb 9 19:00:00.202086 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 19:00:00.202107 kernel: SELinux: policy capability always_check_network=0 Feb 9 19:00:00.202130 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 19:00:00.202152 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 19:00:00.202174 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 19:00:00.202195 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 19:00:00.202220 systemd[1]: Successfully loaded SELinux policy in 52.263ms. Feb 9 19:00:00.202268 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.957ms. Feb 9 19:00:00.202294 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 19:00:00.202318 systemd[1]: Detected virtualization amazon. Feb 9 19:00:00.202341 systemd[1]: Detected architecture x86-64. Feb 9 19:00:00.202366 systemd[1]: Detected first boot. Feb 9 19:00:00.202389 systemd[1]: Initializing machine ID from VM UUID. Feb 9 19:00:00.202413 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
Feb 9 19:00:00.202436 kernel: kauditd_printk_skb: 47 callbacks suppressed Feb 9 19:00:00.202468 kernel: audit: type=1400 audit(1707505196.220:84): avc: denied { associate } for pid=1344 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 9 19:00:00.202495 kernel: audit: type=1300 audit(1707505196.220:84): arch=c000003e syscall=188 success=yes exit=0 a0=c00014f682 a1=c0000d0ae0 a2=c0000d8a00 a3=32 items=0 ppid=1327 pid=1344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:00:00.202520 kernel: audit: type=1327 audit(1707505196.220:84): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 19:00:00.202542 kernel: audit: type=1400 audit(1707505196.259:85): avc: denied { associate } for pid=1344 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 9 19:00:00.202566 kernel: audit: type=1300 audit(1707505196.259:85): arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014f759 a2=1ed a3=0 items=2 ppid=1327 pid=1344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:00:00.202602 kernel: audit: type=1307 audit(1707505196.259:85): cwd="/" Feb 9 19:00:00.202623 kernel: audit: type=1302 audit(1707505196.259:85): 
item=0 name=(null) inode=2 dev=00:28 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:00.202736 kernel: audit: type=1302 audit(1707505196.259:85): item=1 name=(null) inode=3 dev=00:28 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:00.202764 kernel: audit: type=1327 audit(1707505196.259:85): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 19:00:00.202789 systemd[1]: Populated /etc with preset unit settings. Feb 9 19:00:00.203124 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:00:00.203158 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:00:00.203189 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:00:00.203223 systemd[1]: Queued start job for default target multi-user.target. Feb 9 19:00:00.203246 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 19:00:00.206788 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 19:00:00.207327 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Feb 9 19:00:00.207356 systemd[1]: Created slice system-getty.slice. Feb 9 19:00:00.207377 systemd[1]: Created slice system-modprobe.slice. Feb 9 19:00:00.207584 systemd[1]: Created slice system-serial\x2dgetty.slice. 
Feb 9 19:00:00.207609 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 19:00:00.207629 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 19:00:00.207658 systemd[1]: Created slice user.slice. Feb 9 19:00:00.207684 systemd[1]: Started systemd-ask-password-console.path. Feb 9 19:00:00.207703 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 19:00:00.207722 systemd[1]: Set up automount boot.automount. Feb 9 19:00:00.207742 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 19:00:00.207760 systemd[1]: Reached target integritysetup.target. Feb 9 19:00:00.207780 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 19:00:00.207798 systemd[1]: Reached target remote-fs.target. Feb 9 19:00:00.207831 systemd[1]: Reached target slices.target. Feb 9 19:00:00.207857 systemd[1]: Reached target swap.target. Feb 9 19:00:00.207879 systemd[1]: Reached target torcx.target. Feb 9 19:00:00.207897 systemd[1]: Reached target veritysetup.target. Feb 9 19:00:00.207917 systemd[1]: Listening on systemd-coredump.socket. Feb 9 19:00:00.207936 systemd[1]: Listening on systemd-initctl.socket. Feb 9 19:00:00.207957 kernel: audit: type=1400 audit(1707505199.910:86): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 19:00:00.207979 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 19:00:00.207999 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 19:00:00.208018 systemd[1]: Listening on systemd-journald.socket. Feb 9 19:00:00.208037 systemd[1]: Listening on systemd-networkd.socket. Feb 9 19:00:00.208058 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 19:00:00.208076 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 19:00:00.208095 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 19:00:00.208114 systemd[1]: Mounting dev-hugepages.mount... 
Feb 9 19:00:00.208133 systemd[1]: Mounting dev-mqueue.mount... Feb 9 19:00:00.211337 systemd[1]: Mounting media.mount... Feb 9 19:00:00.211382 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 19:00:00.211408 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 19:00:00.211430 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 19:00:00.211453 systemd[1]: Mounting tmp.mount... Feb 9 19:00:00.211484 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 19:00:00.211509 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 19:00:00.211533 systemd[1]: Starting kmod-static-nodes.service... Feb 9 19:00:00.211556 systemd[1]: Starting modprobe@configfs.service... Feb 9 19:00:00.211580 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 19:00:00.211604 systemd[1]: Starting modprobe@drm.service... Feb 9 19:00:00.211628 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 19:00:00.211652 systemd[1]: Starting modprobe@fuse.service... Feb 9 19:00:00.211675 systemd[1]: Starting modprobe@loop.service... Feb 9 19:00:00.211703 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 19:00:00.211728 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 9 19:00:00.211753 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Feb 9 19:00:00.211777 systemd[1]: Starting systemd-journald.service... Feb 9 19:00:00.211801 systemd[1]: Starting systemd-modules-load.service... Feb 9 19:00:00.211866 systemd[1]: Starting systemd-network-generator.service... Feb 9 19:00:00.211890 systemd[1]: Starting systemd-remount-fs.service... Feb 9 19:00:00.211913 systemd[1]: Starting systemd-udev-trigger.service... 
Feb 9 19:00:00.211937 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 19:00:00.211964 systemd[1]: Mounted dev-hugepages.mount. Feb 9 19:00:00.211989 systemd[1]: Mounted dev-mqueue.mount. Feb 9 19:00:00.212011 systemd[1]: Mounted media.mount. Feb 9 19:00:00.212035 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 19:00:00.212058 kernel: loop: module loaded Feb 9 19:00:00.212081 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 19:00:00.212105 systemd[1]: Mounted tmp.mount. Feb 9 19:00:00.212126 systemd[1]: Finished kmod-static-nodes.service. Feb 9 19:00:00.212150 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 19:00:00.212178 systemd[1]: Finished modprobe@configfs.service. Feb 9 19:00:00.212200 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 19:00:00.212223 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 19:00:00.212246 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 19:00:00.212274 systemd[1]: Finished modprobe@drm.service. Feb 9 19:00:00.212298 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 19:00:00.212320 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 19:00:00.212344 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 19:00:00.212366 systemd[1]: Finished modprobe@loop.service. Feb 9 19:00:00.212393 systemd[1]: Finished systemd-network-generator.service. Feb 9 19:00:00.212415 kernel: fuse: init (API version 7.34) Feb 9 19:00:00.212441 systemd[1]: Finished systemd-modules-load.service. Feb 9 19:00:00.212464 systemd[1]: Finished systemd-remount-fs.service. Feb 9 19:00:00.212487 systemd[1]: Reached target network-pre.target. Feb 9 19:00:00.212513 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 19:00:00.212537 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
Feb 9 19:00:00.212561 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 19:00:00.212584 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 19:00:00.212615 systemd-journald[1434]: Journal started Feb 9 19:00:00.212707 systemd-journald[1434]: Runtime Journal (/run/log/journal/ec2deaa0bccb548bb0bd03bf89611f0f) is 4.8M, max 38.7M, 33.9M free. Feb 9 18:59:59.910000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 18:59:59.910000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 9 19:00:00.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:00.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:00.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:00.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:00:00.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:00.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:00.163000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:00.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:00.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:00.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:00.176000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:00.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:00:00.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:00.218527 systemd[1]: Starting systemd-random-seed.service... Feb 9 19:00:00.186000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 19:00:00.274948 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 19:00:00.275014 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:00:00.275040 systemd[1]: Started systemd-journald.service. Feb 9 19:00:00.186000 audit[1434]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffea9f1b2d0 a2=4000 a3=7ffea9f1b36c items=0 ppid=1 pid=1434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:00:00.186000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 19:00:00.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:00.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:00.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:00:00.237000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:00.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:00.236615 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 19:00:00.236936 systemd[1]: Finished modprobe@fuse.service. Feb 9 19:00:00.242599 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 19:00:00.246167 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 19:00:00.258797 systemd[1]: Starting systemd-journal-flush.service... Feb 9 19:00:00.263302 systemd[1]: Finished systemd-random-seed.service. Feb 9 19:00:00.264985 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 19:00:00.266722 systemd[1]: Reached target first-boot-complete.target. Feb 9 19:00:00.298963 systemd-journald[1434]: Time spent on flushing to /var/log/journal/ec2deaa0bccb548bb0bd03bf89611f0f is 92.872ms for 1165 entries. Feb 9 19:00:00.298963 systemd-journald[1434]: System Journal (/var/log/journal/ec2deaa0bccb548bb0bd03bf89611f0f) is 8.0M, max 195.6M, 187.6M free. Feb 9 19:00:00.416727 systemd-journald[1434]: Received client request to flush runtime journal. Feb 9 19:00:00.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:00.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:00:00.307905 systemd[1]: Finished systemd-sysctl.service. Feb 9 19:00:00.417927 udevadm[1476]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 9 19:00:00.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:00.359636 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 19:00:00.363412 systemd[1]: Starting systemd-udev-settle.service... Feb 9 19:00:00.417959 systemd[1]: Finished systemd-journal-flush.service. Feb 9 19:00:00.438165 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 19:00:00.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:00.441753 systemd[1]: Starting systemd-sysusers.service... Feb 9 19:00:00.480974 systemd[1]: Finished systemd-sysusers.service. Feb 9 19:00:00.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:00.483993 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 19:00:00.528412 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 19:00:00.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:01.346059 systemd[1]: Finished systemd-hwdb-update.service. 
Feb 9 19:00:01.363444 kernel: kauditd_printk_skb: 28 callbacks suppressed Feb 9 19:00:01.363563 kernel: audit: type=1130 audit(1707505201.350:113): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:01.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:01.359624 systemd[1]: Starting systemd-udevd.service... Feb 9 19:00:01.408769 systemd-udevd[1499]: Using default interface naming scheme 'v252'. Feb 9 19:00:01.473583 systemd[1]: Started systemd-udevd.service. Feb 9 19:00:01.513853 kernel: audit: type=1130 audit(1707505201.477:114): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:01.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:01.488715 systemd[1]: Starting systemd-networkd.service... Feb 9 19:00:01.533062 systemd[1]: Starting systemd-userdbd.service... Feb 9 19:00:01.673321 (udev-worker)[1511]: Network interface NamePolicy= disabled on kernel command line. Feb 9 19:00:01.686843 systemd[1]: Found device dev-ttyS0.device. Feb 9 19:00:01.690362 systemd[1]: Started systemd-userdbd.service. Feb 9 19:00:01.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:00:01.702837 kernel: audit: type=1130 audit(1707505201.690:115): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:01.801834 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Feb 9 19:00:01.814843 kernel: ACPI: button: Power Button [PWRF] Feb 9 19:00:01.820875 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Feb 9 19:00:01.827557 systemd-networkd[1512]: lo: Link UP Feb 9 19:00:01.827567 systemd-networkd[1512]: lo: Gained carrier Feb 9 19:00:01.828186 systemd-networkd[1512]: Enumeration completed Feb 9 19:00:01.828341 systemd[1]: Started systemd-networkd.service. Feb 9 19:00:01.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:01.831443 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 19:00:01.837829 kernel: audit: type=1130 audit(1707505201.828:116): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:01.838185 systemd-networkd[1512]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Feb 9 19:00:01.845790 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:00:01.845263 systemd-networkd[1512]: eth0: Link UP Feb 9 19:00:01.845436 systemd-networkd[1512]: eth0: Gained carrier Feb 9 19:00:01.853915 kernel: ACPI: button: Sleep Button [SLPF] Feb 9 19:00:01.856123 systemd-networkd[1512]: eth0: DHCPv4 address 172.31.19.7/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 9 19:00:01.864841 kernel: audit: type=1400 audit(1707505201.847:117): avc: denied { confidentiality } for pid=1513 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 19:00:01.847000 audit[1513]: AVC avc: denied { confidentiality } for pid=1513 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 19:00:01.847000 audit[1513]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55b1447497b0 a1=32194 a2=7f58d2a57bc5 a3=5 items=108 ppid=1499 pid=1513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:00:01.874883 kernel: audit: type=1300 audit(1707505201.847:117): arch=c000003e syscall=175 success=yes exit=0 a0=55b1447497b0 a1=32194 a2=7f58d2a57bc5 a3=5 items=108 ppid=1499 pid=1513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:00:01.889925 kernel: audit: type=1307 audit(1707505201.847:117): cwd="/" Feb 9 19:00:01.890030 kernel: audit: type=1302 audit(1707505201.847:117): item=0 name=(null) inode=40 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
Feb 9 19:00:01.890060 kernel: audit: type=1302 audit(1707505201.847:117): item=1 name=(null) inode=14581 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.890111 kernel: audit: type=1302 audit(1707505201.847:117): item=2 name=(null) inode=14581 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: CWD cwd="/" Feb 9 19:00:01.847000 audit: PATH item=0 name=(null) inode=40 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=1 name=(null) inode=14581 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=2 name=(null) inode=14581 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=3 name=(null) inode=14582 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=4 name=(null) inode=14581 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=5 name=(null) inode=14583 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=6 name=(null) inode=14581 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=7 name=(null) inode=14584 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=8 name=(null) inode=14584 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=9 name=(null) inode=14585 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=10 name=(null) inode=14584 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=11 name=(null) inode=14586 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=12 name=(null) inode=14584 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=13 name=(null) inode=14587 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=14 name=(null) inode=14584 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=15 name=(null) inode=14588 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=16 name=(null) inode=14584 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=17 name=(null) inode=14589 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=18 name=(null) inode=14581 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=19 name=(null) inode=14590 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=20 name=(null) inode=14590 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=21 name=(null) inode=14591 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=22 name=(null) inode=14590 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=23 name=(null) inode=14592 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=24 name=(null) inode=14590 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH 
item=25 name=(null) inode=14593 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=26 name=(null) inode=14590 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=27 name=(null) inode=14594 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=28 name=(null) inode=14590 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=29 name=(null) inode=14595 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=30 name=(null) inode=14581 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=31 name=(null) inode=14596 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=32 name=(null) inode=14596 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=33 name=(null) inode=14597 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=34 name=(null) inode=14596 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=35 name=(null) inode=14598 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=36 name=(null) inode=14596 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=37 name=(null) inode=14599 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=38 name=(null) inode=14596 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=39 name=(null) inode=14600 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=40 name=(null) inode=14596 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=41 name=(null) inode=14601 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=42 name=(null) inode=14581 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=43 name=(null) inode=14602 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=44 name=(null) inode=14602 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=45 name=(null) inode=14603 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=46 name=(null) inode=14602 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=47 name=(null) inode=14604 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=48 name=(null) inode=14602 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=49 name=(null) inode=14605 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=50 name=(null) inode=14602 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=51 name=(null) inode=14606 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=52 name=(null) inode=14602 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=53 name=(null) inode=14607 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=54 name=(null) inode=40 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=55 name=(null) inode=14608 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=56 name=(null) inode=14608 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=57 name=(null) inode=14609 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=58 name=(null) inode=14608 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=59 name=(null) inode=14610 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=60 name=(null) inode=14608 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=61 name=(null) inode=14611 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 
9 19:00:01.847000 audit: PATH item=62 name=(null) inode=14611 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=63 name=(null) inode=14612 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=64 name=(null) inode=14611 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=65 name=(null) inode=14613 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=66 name=(null) inode=14611 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=67 name=(null) inode=14614 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=68 name=(null) inode=14611 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=69 name=(null) inode=14615 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=70 name=(null) inode=14611 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=71 name=(null) 
inode=14616 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=72 name=(null) inode=14608 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=73 name=(null) inode=14617 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=74 name=(null) inode=14617 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=75 name=(null) inode=14618 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=76 name=(null) inode=14617 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=77 name=(null) inode=14619 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=78 name=(null) inode=14617 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=79 name=(null) inode=14620 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=80 name=(null) inode=14617 dev=00:0b mode=040750 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=81 name=(null) inode=14621 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=82 name=(null) inode=14617 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=83 name=(null) inode=14622 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=84 name=(null) inode=14608 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=85 name=(null) inode=14623 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=86 name=(null) inode=14623 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=87 name=(null) inode=14624 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=88 name=(null) inode=14623 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=89 name=(null) inode=14625 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=90 name=(null) inode=14623 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=91 name=(null) inode=14626 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=92 name=(null) inode=14623 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=93 name=(null) inode=14627 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=94 name=(null) inode=14623 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=95 name=(null) inode=14628 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=96 name=(null) inode=14608 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=97 name=(null) inode=14629 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=98 name=(null) inode=14629 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=99 name=(null) inode=14630 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=100 name=(null) inode=14629 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=101 name=(null) inode=14631 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=102 name=(null) inode=14629 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=103 name=(null) inode=14632 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=104 name=(null) inode=14629 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=105 name=(null) inode=14633 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=106 name=(null) inode=14629 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:01.847000 audit: PATH item=107 name=(null) inode=14634 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 
19:00:01.847000 audit: PROCTITLE proctitle="(udev-worker)" Feb 9 19:00:01.909873 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5 Feb 9 19:00:01.909966 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Feb 9 19:00:01.925145 kernel: mousedev: PS/2 mouse device common for all mice Feb 9 19:00:01.950840 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1510) Feb 9 19:00:02.058242 systemd[1]: dev-disk-by\x2dlabel-OEM.device was skipped because of an unmet condition check (ConditionPathExists=!/usr/.noupdate). Feb 9 19:00:02.137430 systemd[1]: Finished systemd-udev-settle.service. Feb 9 19:00:02.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:02.148891 systemd[1]: Starting lvm2-activation-early.service... Feb 9 19:00:02.173612 lvm[1614]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 19:00:02.207707 systemd[1]: Finished lvm2-activation-early.service. Feb 9 19:00:02.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:02.209490 systemd[1]: Reached target cryptsetup.target. Feb 9 19:00:02.212785 systemd[1]: Starting lvm2-activation.service... Feb 9 19:00:02.223076 lvm[1616]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 19:00:02.256138 systemd[1]: Finished lvm2-activation.service. Feb 9 19:00:02.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:00:02.257847 systemd[1]: Reached target local-fs-pre.target. Feb 9 19:00:02.259690 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 19:00:02.259726 systemd[1]: Reached target local-fs.target. Feb 9 19:00:02.262011 systemd[1]: Reached target machines.target. Feb 9 19:00:02.265062 systemd[1]: Starting ldconfig.service... Feb 9 19:00:02.266900 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 19:00:02.266985 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:00:02.268719 systemd[1]: Starting systemd-boot-update.service... Feb 9 19:00:02.272584 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 19:00:02.299338 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 19:00:02.300832 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 19:00:02.300941 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 19:00:02.302863 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 19:00:02.346440 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1619 (bootctl) Feb 9 19:00:02.348587 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 19:00:02.361356 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 19:00:02.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:00:02.369723 systemd-tmpfiles[1622]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 19:00:02.374522 systemd-tmpfiles[1622]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 19:00:02.381372 systemd-tmpfiles[1622]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 19:00:02.510944 systemd-fsck[1628]: fsck.fat 4.2 (2021-01-31) Feb 9 19:00:02.510944 systemd-fsck[1628]: /dev/nvme0n1p1: 789 files, 115339/258078 clusters Feb 9 19:00:02.515065 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 19:00:02.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:02.521659 systemd[1]: Mounting boot.mount... Feb 9 19:00:02.570511 systemd[1]: Mounted boot.mount. Feb 9 19:00:02.606655 systemd[1]: Finished systemd-boot-update.service. Feb 9 19:00:02.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:02.775317 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 19:00:02.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:02.779710 systemd[1]: Starting audit-rules.service... Feb 9 19:00:02.783783 systemd[1]: Starting clean-ca-certificates.service... Feb 9 19:00:02.787066 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 19:00:02.796233 systemd[1]: Starting systemd-resolved.service... 
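The audit records above (event 117) are flat sequences of key=value fields. A minimal sketch, assuming a Python environment, of pulling one such AVC record apart; `parse_avc` is an illustrative helper, not part of any tool shown in this log:

```python
import re

# AVC record copied from the audit event logged above (permissive lockdown denial).
line = ('audit[1513]: AVC avc: denied { confidentiality } for pid=1513 '
        'comm="(udev-worker)" lockdown_reason="use of tracefs" '
        'scontext=system_u:system_r:kernel_t:s0 '
        'tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1')

def parse_avc(record: str) -> dict:
    """Extract key=value fields (quoted or bare) from an audit record."""
    pairs = re.findall(r'(\w+)=("[^"]*"|\S+)', record)
    return {key: value.strip('"') for key, value in pairs}

info = parse_avc(line)
print(info["comm"], info["tclass"], info["permissive"])  # (udev-worker) lockdown 1
```

On systems with auditd installed, `ausearch -i` performs the same kind of interpretation; the key=value shape is what makes ad-hoc filtering like this practical.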
Feb 9 19:00:02.800808 systemd[1]: Starting systemd-timesyncd.service... Feb 9 19:00:02.811902 systemd[1]: Starting systemd-update-utmp.service... Feb 9 19:00:02.815135 systemd[1]: Finished clean-ca-certificates.service. Feb 9 19:00:02.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:02.821560 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 19:00:02.848000 audit[1653]: SYSTEM_BOOT pid=1653 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 19:00:02.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:02.867555 systemd[1]: Finished systemd-update-utmp.service. Feb 9 19:00:02.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:02.938510 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 19:00:02.948196 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 19:00:02.951660 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 19:00:02.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:00:03.025000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 19:00:03.025000 audit[1670]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdd87c3870 a2=420 a3=0 items=0 ppid=1646 pid=1670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:00:03.025000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 19:00:03.027317 systemd[1]: Finished audit-rules.service. Feb 9 19:00:03.027876 augenrules[1670]: No rules Feb 9 19:00:03.078271 systemd[1]: Started systemd-timesyncd.service. Feb 9 19:00:03.079928 systemd[1]: Reached target time-set.target. Feb 9 19:00:03.111168 systemd-resolved[1650]: Positive Trust Anchors: Feb 9 19:00:03.111187 systemd-resolved[1650]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 19:00:03.111246 systemd-resolved[1650]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 19:00:03.154246 systemd-resolved[1650]: Defaulting to hostname 'linux'. Feb 9 19:00:03.157136 systemd[1]: Started systemd-resolved.service. Feb 9 19:00:03.599814 systemd-timesyncd[1651]: Contacted time server 192.189.65.187:123 (0.flatcar.pool.ntp.org). Feb 9 19:00:03.599915 systemd-timesyncd[1651]: Initial clock synchronization to Fri 2024-02-09 19:00:03.599637 UTC. 
Feb 9 19:00:03.600299 systemd[1]: Reached target network.target. Feb 9 19:00:03.601431 systemd[1]: Reached target nss-lookup.target. Feb 9 19:00:03.604068 systemd-resolved[1650]: Clock change detected. Flushing caches. Feb 9 19:00:03.605428 ldconfig[1618]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 19:00:03.613358 systemd[1]: Finished ldconfig.service. Feb 9 19:00:03.619042 systemd[1]: Starting systemd-update-done.service... Feb 9 19:00:03.629497 systemd[1]: Finished systemd-update-done.service. Feb 9 19:00:03.633808 systemd[1]: Reached target sysinit.target. Feb 9 19:00:03.635938 systemd[1]: Started motdgen.path. Feb 9 19:00:03.637390 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 19:00:03.640681 systemd[1]: Started logrotate.timer. Feb 9 19:00:03.641918 systemd[1]: Started mdadm.timer. Feb 9 19:00:03.642787 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 19:00:03.643835 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 19:00:03.643871 systemd[1]: Reached target paths.target. Feb 9 19:00:03.644906 systemd[1]: Reached target timers.target. Feb 9 19:00:03.646642 systemd[1]: Listening on dbus.socket. Feb 9 19:00:03.650552 systemd[1]: Starting docker.socket... Feb 9 19:00:03.655029 systemd[1]: Listening on sshd.socket. Feb 9 19:00:03.656892 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:00:03.658170 systemd[1]: Listening on docker.socket. Feb 9 19:00:03.660018 systemd[1]: Reached target sockets.target. Feb 9 19:00:03.661920 systemd[1]: Reached target basic.target. 
Feb 9 19:00:03.664695 systemd[1]: System is tainted: cgroupsv1 Feb 9 19:00:03.664766 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 19:00:03.664801 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 19:00:03.667318 systemd[1]: Starting containerd.service... Feb 9 19:00:03.670051 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Feb 9 19:00:03.674496 systemd[1]: Starting dbus.service... Feb 9 19:00:03.680097 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 19:00:03.688766 systemd[1]: Starting extend-filesystems.service... Feb 9 19:00:03.692865 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 19:00:03.698503 systemd[1]: Starting motdgen.service... Feb 9 19:00:03.702817 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 19:00:03.707791 systemd[1]: Starting prepare-critools.service... Feb 9 19:00:03.711000 systemd[1]: Starting prepare-helm.service... Feb 9 19:00:03.714950 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 19:00:03.725449 systemd[1]: Starting sshd-keygen.service... Feb 9 19:00:03.737721 systemd[1]: Starting systemd-logind.service... Feb 9 19:00:03.746951 jq[1686]: false Feb 9 19:00:03.752296 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:00:03.753969 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 9 19:00:03.756340 systemd[1]: Starting update-engine.service... Feb 9 19:00:03.767647 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 19:00:03.773069 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Feb 9 19:00:03.773419 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 19:00:03.794728 jq[1702]: true Feb 9 19:00:03.844540 tar[1705]: crictl Feb 9 19:00:03.848554 tar[1706]: linux-amd64/helm Feb 9 19:00:03.864107 jq[1712]: true Feb 9 19:00:03.877735 tar[1704]: ./ Feb 9 19:00:03.877735 tar[1704]: ./macvlan Feb 9 19:00:03.906431 extend-filesystems[1687]: Found nvme0n1 Feb 9 19:00:03.908229 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 19:00:03.908680 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 9 19:00:03.924049 extend-filesystems[1687]: Found nvme0n1p1 Feb 9 19:00:03.925264 extend-filesystems[1687]: Found nvme0n1p2 Feb 9 19:00:03.934428 extend-filesystems[1687]: Found nvme0n1p3 Feb 9 19:00:03.935638 extend-filesystems[1687]: Found usr Feb 9 19:00:03.936660 extend-filesystems[1687]: Found nvme0n1p4 Feb 9 19:00:03.940657 extend-filesystems[1687]: Found nvme0n1p6 Feb 9 19:00:03.942199 extend-filesystems[1687]: Found nvme0n1p7 Feb 9 19:00:03.946652 extend-filesystems[1687]: Found nvme0n1p9 Feb 9 19:00:03.947825 extend-filesystems[1687]: Checking size of /dev/nvme0n1p9 Feb 9 19:00:03.954005 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 19:00:03.954338 systemd[1]: Finished motdgen.service. Feb 9 19:00:03.972443 dbus-daemon[1684]: [system] SELinux support is enabled Feb 9 19:00:03.972713 systemd[1]: Started dbus.service. Feb 9 19:00:03.976884 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 19:00:03.976919 systemd[1]: Reached target system-config.target. Feb 9 19:00:03.978128 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 19:00:03.978151 systemd[1]: Reached target user-config.target. 
Feb 9 19:00:03.982913 extend-filesystems[1687]: Resized partition /dev/nvme0n1p9 Feb 9 19:00:03.983789 dbus-daemon[1684]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1512 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 9 19:00:03.995119 extend-filesystems[1746]: resize2fs 1.46.5 (30-Dec-2021) Feb 9 19:00:04.006187 dbus-daemon[1684]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 9 19:00:04.008532 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 9 19:00:04.014581 systemd[1]: Starting systemd-hostnamed.service... Feb 9 19:00:04.098102 update_engine[1701]: I0209 19:00:04.097208 1701 main.cc:92] Flatcar Update Engine starting Feb 9 19:00:04.140451 env[1709]: time="2024-02-09T19:00:04.138543608Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 19:00:04.143732 systemd[1]: Started update-engine.service. Feb 9 19:00:04.147332 systemd[1]: Started locksmithd.service. Feb 9 19:00:04.157966 update_engine[1701]: I0209 19:00:04.144114 1701 update_check_scheduler.cc:74] Next update check in 2m34s Feb 9 19:00:04.158545 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 9 19:00:04.162695 systemd-networkd[1512]: eth0: Gained IPv6LL Feb 9 19:00:04.166754 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 19:00:04.205012 extend-filesystems[1746]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 9 19:00:04.205012 extend-filesystems[1746]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 9 19:00:04.205012 extend-filesystems[1746]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Feb 9 19:00:04.168606 systemd[1]: Reached target network-online.target. 
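The EXT4-fs messages above show an online resize of `/dev/nvme0n1p9` from 553472 to 1489915 blocks; per the resize2fs output these are 4 KiB blocks. A quick sketch of the sizes those block counts imply:

```python
# Sizes implied by the ext4 block counts in the resize messages above
# (4 KiB blocks, per the "1489915 (4k) blocks" line).
BLOCK = 4096
old_blocks, new_blocks = 553472, 1489915
old_gib = old_blocks * BLOCK / 2**30
new_gib = new_blocks * BLOCK / 2**30
print(f"{old_gib:.2f} GiB -> {new_gib:.2f} GiB")  # 2.11 GiB -> 5.68 GiB
```

This is the usual Flatcar first-boot pattern: the root partition is grown to fill the EC2 volume, then the mounted filesystem is resized on-line.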
Feb 9 19:00:04.218578 extend-filesystems[1687]: Resized filesystem in /dev/nvme0n1p9 Feb 9 19:00:04.171842 systemd[1]: Started amazon-ssm-agent.service. Feb 9 19:00:04.175137 systemd[1]: Started nvidia.service. Feb 9 19:00:04.203036 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 19:00:04.221724 bash[1755]: Updated "/home/core/.ssh/authorized_keys" Feb 9 19:00:04.203496 systemd[1]: Finished extend-filesystems.service. Feb 9 19:00:04.234072 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 19:00:04.271580 systemd-logind[1698]: Watching system buttons on /dev/input/event1 (Power Button) Feb 9 19:00:04.271611 systemd-logind[1698]: Watching system buttons on /dev/input/event2 (Sleep Button) Feb 9 19:00:04.271638 systemd-logind[1698]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 9 19:00:04.277982 systemd-logind[1698]: New seat seat0. Feb 9 19:00:04.309743 systemd[1]: Started systemd-logind.service. Feb 9 19:00:04.431891 amazon-ssm-agent[1774]: 2024/02/09 19:00:04 Failed to load instance info from vault. RegistrationKey does not exist. Feb 9 19:00:04.456209 amazon-ssm-agent[1774]: Initializing new seelog logger Feb 9 19:00:04.458292 amazon-ssm-agent[1774]: New Seelog Logger Creation Complete Feb 9 19:00:04.460678 amazon-ssm-agent[1774]: 2024/02/09 19:00:04 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 9 19:00:04.462569 amazon-ssm-agent[1774]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 9 19:00:04.463034 amazon-ssm-agent[1774]: 2024/02/09 19:00:04 processing appconfig overrides Feb 9 19:00:04.482003 env[1709]: time="2024-02-09T19:00:04.481946749Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 19:00:04.487940 env[1709]: time="2024-02-09T19:00:04.487839513Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Feb 9 19:00:04.489732 env[1709]: time="2024-02-09T19:00:04.489683332Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:00:04.494592 env[1709]: time="2024-02-09T19:00:04.494552478Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:00:04.495137 env[1709]: time="2024-02-09T19:00:04.495107329Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:00:04.495235 env[1709]: time="2024-02-09T19:00:04.495217286Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 9 19:00:04.495317 env[1709]: time="2024-02-09T19:00:04.495301527Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 19:00:04.495382 env[1709]: time="2024-02-09T19:00:04.495368733Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 19:00:04.495569 env[1709]: time="2024-02-09T19:00:04.495553145Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:00:04.495978 env[1709]: time="2024-02-09T19:00:04.495955652Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:00:04.496836 env[1709]: time="2024-02-09T19:00:04.496806025Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:00:04.496930 env[1709]: time="2024-02-09T19:00:04.496914294Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 19:00:04.497057 env[1709]: time="2024-02-09T19:00:04.497040347Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 19:00:04.497134 env[1709]: time="2024-02-09T19:00:04.497122798Z" level=info msg="metadata content store policy set" policy=shared Feb 9 19:00:04.514545 env[1709]: time="2024-02-09T19:00:04.514480903Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 19:00:04.514754 env[1709]: time="2024-02-09T19:00:04.514733508Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 19:00:04.514859 env[1709]: time="2024-02-09T19:00:04.514843694Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 19:00:04.515313 env[1709]: time="2024-02-09T19:00:04.515136845Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 19:00:04.515449 env[1709]: time="2024-02-09T19:00:04.515431751Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 19:00:04.515576 env[1709]: time="2024-02-09T19:00:04.515559071Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 19:00:04.515668 env[1709]: time="2024-02-09T19:00:04.515652334Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Feb 9 19:00:04.515762 env[1709]: time="2024-02-09T19:00:04.515746488Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 19:00:04.515956 env[1709]: time="2024-02-09T19:00:04.515840033Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 19:00:04.516072 env[1709]: time="2024-02-09T19:00:04.516054303Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 19:00:04.516180 env[1709]: time="2024-02-09T19:00:04.516165038Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 19:00:04.516271 env[1709]: time="2024-02-09T19:00:04.516257781Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 19:00:04.516539 env[1709]: time="2024-02-09T19:00:04.516497577Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 19:00:04.516783 env[1709]: time="2024-02-09T19:00:04.516753235Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 19:00:04.517716 env[1709]: time="2024-02-09T19:00:04.517693006Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 19:00:04.517845 env[1709]: time="2024-02-09T19:00:04.517828049Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 19:00:04.517944 env[1709]: time="2024-02-09T19:00:04.517928370Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 19:00:04.518173 env[1709]: time="2024-02-09T19:00:04.518148013Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Feb 9 19:00:04.518275 env[1709]: time="2024-02-09T19:00:04.518260001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 19:00:04.518369 env[1709]: time="2024-02-09T19:00:04.518354797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 19:00:04.518454 env[1709]: time="2024-02-09T19:00:04.518440669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 19:00:04.518570 env[1709]: time="2024-02-09T19:00:04.518554206Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 19:00:04.518660 env[1709]: time="2024-02-09T19:00:04.518644836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 19:00:04.518762 env[1709]: time="2024-02-09T19:00:04.518747869Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 19:00:04.518848 env[1709]: time="2024-02-09T19:00:04.518834733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 19:00:04.518950 env[1709]: time="2024-02-09T19:00:04.518935672Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 19:00:04.519368 env[1709]: time="2024-02-09T19:00:04.519346983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 19:00:04.519493 env[1709]: time="2024-02-09T19:00:04.519476876Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 19:00:04.519596 env[1709]: time="2024-02-09T19:00:04.519580043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Feb 9 19:00:04.519686 env[1709]: time="2024-02-09T19:00:04.519671991Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 19:00:04.519788 env[1709]: time="2024-02-09T19:00:04.519771217Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 19:00:04.519943 env[1709]: time="2024-02-09T19:00:04.519924148Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 19:00:04.520056 env[1709]: time="2024-02-09T19:00:04.520039557Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 19:00:04.520175 env[1709]: time="2024-02-09T19:00:04.520159244Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 9 19:00:04.520804 env[1709]: time="2024-02-09T19:00:04.520711686Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin 
NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 19:00:04.523145 env[1709]: time="2024-02-09T19:00:04.520970302Z" level=info msg="Connect containerd service" Feb 9 19:00:04.523145 env[1709]: time="2024-02-09T19:00:04.521039876Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 19:00:04.523145 env[1709]: time="2024-02-09T19:00:04.522656708Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 19:00:04.523145 env[1709]: time="2024-02-09T19:00:04.523031174Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Feb 9 19:00:04.524659 env[1709]: time="2024-02-09T19:00:04.524622624Z" level=info msg="Start subscribing containerd event" Feb 9 19:00:04.534605 tar[1704]: ./static Feb 9 19:00:04.534734 env[1709]: time="2024-02-09T19:00:04.534597860Z" level=info msg="Start recovering state" Feb 9 19:00:04.534734 env[1709]: time="2024-02-09T19:00:04.534705652Z" level=info msg="Start event monitor" Feb 9 19:00:04.534734 env[1709]: time="2024-02-09T19:00:04.534721498Z" level=info msg="Start snapshots syncer" Feb 9 19:00:04.534869 env[1709]: time="2024-02-09T19:00:04.534734806Z" level=info msg="Start cni network conf syncer for default" Feb 9 19:00:04.534869 env[1709]: time="2024-02-09T19:00:04.534749709Z" level=info msg="Start streaming server" Feb 9 19:00:04.535051 env[1709]: time="2024-02-09T19:00:04.535009969Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 9 19:00:04.535242 systemd[1]: Started containerd.service. Feb 9 19:00:04.535541 env[1709]: time="2024-02-09T19:00:04.535496893Z" level=info msg="containerd successfully booted in 0.442521s" Feb 9 19:00:04.655326 dbus-daemon[1684]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 9 19:00:04.655593 systemd[1]: Started systemd-hostnamed.service. Feb 9 19:00:04.656182 dbus-daemon[1684]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1751 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 9 19:00:04.661480 systemd[1]: Starting polkit.service... 
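containerd reports "successfully booted in 0.442521s"; the elapsed time between its first and last log lines above can be checked directly from the timestamps. A sketch with hand-rolled parsing (datetime's microsecond precision would truncate the nanosecond fractions; the small gap versus the self-reported figure presumably covers daemon startup before the first log line):

```python
# Elapsed time between two containerd log timestamps copied from the
# messages above; parsed by hand to keep nanosecond precision.
def to_ns(ts: str) -> int:
    # "2024-02-09T19:00:04.138543608Z" -> nanoseconds within the day
    clock, frac = ts.rstrip("Z").split("T")[1].split(".")
    h, m, s = map(int, clock.split(":"))
    return (h * 3600 + m * 60 + s) * 10**9 + int(frac.ljust(9, "0"))

start = to_ns("2024-02-09T19:00:04.138543608Z")  # "starting containerd"
ready = to_ns("2024-02-09T19:00:04.535496893Z")  # "containerd successfully booted"
print(f"{(ready - start) / 1e9:.3f}s")  # 0.397s
```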
Feb 9 19:00:04.699263 polkitd[1842]: Started polkitd version 121 Feb 9 19:00:04.719538 polkitd[1842]: Loading rules from directory /etc/polkit-1/rules.d Feb 9 19:00:04.724651 polkitd[1842]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 9 19:00:04.729448 polkitd[1842]: Finished loading, compiling and executing 2 rules Feb 9 19:00:04.731783 dbus-daemon[1684]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 9 19:00:04.732044 systemd[1]: Started polkit.service. Feb 9 19:00:04.734258 polkitd[1842]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 9 19:00:04.769636 systemd[1]: nvidia.service: Deactivated successfully. Feb 9 19:00:04.774039 systemd-hostnamed[1751]: Hostname set to (transient) Feb 9 19:00:04.774166 systemd-resolved[1650]: System hostname changed to 'ip-172-31-19-7'. Feb 9 19:00:04.794671 tar[1704]: ./vlan Feb 9 19:00:04.999751 coreos-metadata[1683]: Feb 09 19:00:04.999 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 9 19:00:05.016040 coreos-metadata[1683]: Feb 09 19:00:05.015 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Feb 9 19:00:05.017546 coreos-metadata[1683]: Feb 09 19:00:05.017 INFO Fetch successful Feb 9 19:00:05.017717 coreos-metadata[1683]: Feb 09 19:00:05.017 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 9 19:00:05.019858 coreos-metadata[1683]: Feb 09 19:00:05.019 INFO Fetch successful Feb 9 19:00:05.025168 unknown[1683]: wrote ssh authorized keys file for user: core Feb 9 19:00:05.054644 update-ssh-keys[1868]: Updated "/home/core/.ssh/authorized_keys" Feb 9 19:00:05.056045 systemd[1]: Finished coreos-metadata-sshkeys@core.service. 
Feb 9 19:00:05.077838 tar[1704]: ./portmap Feb 9 19:00:05.206536 amazon-ssm-agent[1774]: 2024-02-09 19:00:05 INFO Create new startup processor Feb 9 19:00:05.206979 amazon-ssm-agent[1774]: 2024-02-09 19:00:05 INFO [LongRunningPluginsManager] registered plugins: {} Feb 9 19:00:05.207077 amazon-ssm-agent[1774]: 2024-02-09 19:00:05 INFO Initializing bookkeeping folders Feb 9 19:00:05.207157 amazon-ssm-agent[1774]: 2024-02-09 19:00:05 INFO removing the completed state files Feb 9 19:00:05.207227 amazon-ssm-agent[1774]: 2024-02-09 19:00:05 INFO Initializing bookkeeping folders for long running plugins Feb 9 19:00:05.207287 amazon-ssm-agent[1774]: 2024-02-09 19:00:05 INFO Initializing replies folder for MDS reply requests that couldn't reach the service Feb 9 19:00:05.207341 amazon-ssm-agent[1774]: 2024-02-09 19:00:05 INFO Initializing healthcheck folders for long running plugins Feb 9 19:00:05.207398 amazon-ssm-agent[1774]: 2024-02-09 19:00:05 INFO Initializing locations for inventory plugin Feb 9 19:00:05.207471 amazon-ssm-agent[1774]: 2024-02-09 19:00:05 INFO Initializing default location for custom inventory Feb 9 19:00:05.207550 amazon-ssm-agent[1774]: 2024-02-09 19:00:05 INFO Initializing default location for file inventory Feb 9 19:00:05.207611 amazon-ssm-agent[1774]: 2024-02-09 19:00:05 INFO Initializing default location for role inventory Feb 9 19:00:05.207769 amazon-ssm-agent[1774]: 2024-02-09 19:00:05 INFO Init the cloudwatchlogs publisher Feb 9 19:00:05.207853 amazon-ssm-agent[1774]: 2024-02-09 19:00:05 INFO [instanceID=i-007c39b6927916b9b] Successfully loaded platform independent plugin aws:softwareInventory Feb 9 19:00:05.207934 amazon-ssm-agent[1774]: 2024-02-09 19:00:05 INFO [instanceID=i-007c39b6927916b9b] Successfully loaded platform independent plugin aws:runPowerShellScript Feb 9 19:00:05.208004 amazon-ssm-agent[1774]: 2024-02-09 19:00:05 INFO [instanceID=i-007c39b6927916b9b] Successfully loaded platform independent plugin aws:configureDocker Feb 9 
19:00:05.208069 amazon-ssm-agent[1774]: 2024-02-09 19:00:05 INFO [instanceID=i-007c39b6927916b9b] Successfully loaded platform independent plugin aws:refreshAssociation Feb 9 19:00:05.208125 amazon-ssm-agent[1774]: 2024-02-09 19:00:05 INFO [instanceID=i-007c39b6927916b9b] Successfully loaded platform independent plugin aws:updateSsmAgent Feb 9 19:00:05.208182 amazon-ssm-agent[1774]: 2024-02-09 19:00:05 INFO [instanceID=i-007c39b6927916b9b] Successfully loaded platform independent plugin aws:runDockerAction Feb 9 19:00:05.208247 amazon-ssm-agent[1774]: 2024-02-09 19:00:05 INFO [instanceID=i-007c39b6927916b9b] Successfully loaded platform independent plugin aws:configurePackage Feb 9 19:00:05.208317 amazon-ssm-agent[1774]: 2024-02-09 19:00:05 INFO [instanceID=i-007c39b6927916b9b] Successfully loaded platform independent plugin aws:downloadContent Feb 9 19:00:05.208462 amazon-ssm-agent[1774]: 2024-02-09 19:00:05 INFO [instanceID=i-007c39b6927916b9b] Successfully loaded platform independent plugin aws:runDocument Feb 9 19:00:05.208462 amazon-ssm-agent[1774]: 2024-02-09 19:00:05 INFO [instanceID=i-007c39b6927916b9b] Successfully loaded platform dependent plugin aws:runShellScript Feb 9 19:00:05.208462 amazon-ssm-agent[1774]: 2024-02-09 19:00:05 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 Feb 9 19:00:05.208462 amazon-ssm-agent[1774]: 2024-02-09 19:00:05 INFO OS: linux, Arch: amd64 Feb 9 19:00:05.210627 amazon-ssm-agent[1774]: datastore file /var/lib/amazon/ssm/i-007c39b6927916b9b/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute Feb 9 19:00:05.256299 tar[1704]: ./host-local Feb 9 19:00:05.306920 amazon-ssm-agent[1774]: 2024-02-09 19:00:05 INFO [MessageGatewayService] Starting session document processing engine... 
Feb 9 19:00:05.384538 tar[1704]: ./vrf Feb 9 19:00:05.401146 amazon-ssm-agent[1774]: 2024-02-09 19:00:05 INFO [MessageGatewayService] [EngineProcessor] Starting Feb 9 19:00:05.495457 amazon-ssm-agent[1774]: 2024-02-09 19:00:05 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module. Feb 9 19:00:05.512208 tar[1704]: ./bridge Feb 9 19:00:05.590033 amazon-ssm-agent[1774]: 2024-02-09 19:00:05 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-007c39b6927916b9b, requestId: 97bece88-4b97-47b4-8b58-f8056f3f75bb Feb 9 19:00:05.672491 tar[1704]: ./tuning Feb 9 19:00:05.684655 amazon-ssm-agent[1774]: 2024-02-09 19:00:05 INFO [MessagingDeliveryService] Starting document processing engine... Feb 9 19:00:05.763114 tar[1704]: ./firewall Feb 9 19:00:05.779732 amazon-ssm-agent[1774]: 2024-02-09 19:00:05 INFO [MessagingDeliveryService] [EngineProcessor] Starting Feb 9 19:00:05.874804 amazon-ssm-agent[1774]: 2024-02-09 19:00:05 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing Feb 9 19:00:05.893554 systemd[1]: Finished prepare-critools.service. Feb 9 19:00:05.934117 tar[1704]: ./host-device Feb 9 19:00:05.970090 amazon-ssm-agent[1774]: 2024-02-09 19:00:05 INFO [OfflineService] Starting document processing engine... Feb 9 19:00:05.995105 tar[1704]: ./sbr Feb 9 19:00:06.045986 tar[1704]: ./loopback Feb 9 19:00:06.065556 amazon-ssm-agent[1774]: 2024-02-09 19:00:05 INFO [OfflineService] [EngineProcessor] Starting Feb 9 19:00:06.097652 tar[1704]: ./dhcp Feb 9 19:00:06.161471 amazon-ssm-agent[1774]: 2024-02-09 19:00:05 INFO [OfflineService] [EngineProcessor] Initial processing Feb 9 19:00:06.257077 amazon-ssm-agent[1774]: 2024-02-09 19:00:05 INFO [OfflineService] Starting message polling Feb 9 19:00:06.275296 tar[1706]: linux-amd64/LICENSE Feb 9 19:00:06.278702 tar[1706]: linux-amd64/README.md Feb 9 19:00:06.294852 systemd[1]: Finished prepare-helm.service. 
Feb 9 19:00:06.333088 tar[1704]: ./ptp Feb 9 19:00:06.353096 amazon-ssm-agent[1774]: 2024-02-09 19:00:05 INFO [OfflineService] Starting send replies to MDS Feb 9 19:00:06.386542 tar[1704]: ./ipvlan Feb 9 19:00:06.444145 tar[1704]: ./bandwidth Feb 9 19:00:06.449471 amazon-ssm-agent[1774]: 2024-02-09 19:00:05 INFO [LongRunningPluginsManager] starting long running plugin manager Feb 9 19:00:06.567034 amazon-ssm-agent[1774]: 2024-02-09 19:00:05 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute Feb 9 19:00:06.576830 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 19:00:06.663736 amazon-ssm-agent[1774]: 2024-02-09 19:00:05 INFO [MessageGatewayService] listening reply. Feb 9 19:00:06.665202 locksmithd[1768]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 19:00:06.760854 amazon-ssm-agent[1774]: 2024-02-09 19:00:05 INFO [HealthCheck] HealthCheck reporting agent health. Feb 9 19:00:06.857808 amazon-ssm-agent[1774]: 2024-02-09 19:00:05 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck Feb 9 19:00:06.955219 amazon-ssm-agent[1774]: 2024-02-09 19:00:05 INFO [MessagingDeliveryService] Starting message polling Feb 9 19:00:07.052009 sshd_keygen[1723]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 19:00:07.052853 amazon-ssm-agent[1774]: 2024-02-09 19:00:05 INFO [MessagingDeliveryService] Starting send replies to MDS Feb 9 19:00:07.077795 systemd[1]: Finished sshd-keygen.service. Feb 9 19:00:07.081242 systemd[1]: Starting issuegen.service... Feb 9 19:00:07.089268 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 19:00:07.089583 systemd[1]: Finished issuegen.service. Feb 9 19:00:07.093812 systemd[1]: Starting systemd-user-sessions.service... Feb 9 19:00:07.105801 systemd[1]: Finished systemd-user-sessions.service. Feb 9 19:00:07.109299 systemd[1]: Started getty@tty1.service. 
Feb 9 19:00:07.112452 systemd[1]: Started serial-getty@ttyS0.service.
Feb 9 19:00:07.113898 systemd[1]: Reached target getty.target.
Feb 9 19:00:07.114979 systemd[1]: Reached target multi-user.target.
Feb 9 19:00:07.118061 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Feb 9 19:00:07.140046 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Feb 9 19:00:07.140419 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Feb 9 19:00:07.149481 systemd[1]: Startup finished in 9.844s (kernel) + 10.811s (userspace) = 20.655s.
Feb 9 19:00:07.161245 amazon-ssm-agent[1774]: 2024-02-09 19:00:05 INFO [instanceID=i-007c39b6927916b9b] Starting association polling
Feb 9 19:00:07.259444 amazon-ssm-agent[1774]: 2024-02-09 19:00:05 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting
Feb 9 19:00:07.357437 amazon-ssm-agent[1774]: 2024-02-09 19:00:05 INFO [MessagingDeliveryService] [Association] Launching response handler
Feb 9 19:00:07.455683 amazon-ssm-agent[1774]: 2024-02-09 19:00:05 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing
Feb 9 19:00:07.554139 amazon-ssm-agent[1774]: 2024-02-09 19:00:05 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service
Feb 9 19:00:07.652999 amazon-ssm-agent[1774]: 2024-02-09 19:00:05 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized
Feb 9 19:00:07.751780 amazon-ssm-agent[1774]: 2024-02-09 19:00:05 INFO [StartupProcessor] Executing startup processor tasks
Feb 9 19:00:07.850861 amazon-ssm-agent[1774]: 2024-02-09 19:00:05 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running
Feb 9 19:00:07.950157 amazon-ssm-agent[1774]: 2024-02-09 19:00:05 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk
Feb 9 19:00:08.049556 amazon-ssm-agent[1774]: 2024-02-09 19:00:05 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.2
Feb 9 19:00:08.149210 amazon-ssm-agent[1774]: 2024-02-09 19:00:05 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-007c39b6927916b9b?role=subscribe&stream=input
Feb 9 19:00:08.249098 amazon-ssm-agent[1774]: 2024-02-09 19:00:05 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-007c39b6927916b9b?role=subscribe&stream=input
Feb 9 19:00:08.349173 amazon-ssm-agent[1774]: 2024-02-09 19:00:05 INFO [MessageGatewayService] Starting receiving message from control channel
Feb 9 19:00:08.449405 amazon-ssm-agent[1774]: 2024-02-09 19:00:05 INFO [MessageGatewayService] [EngineProcessor] Initial processing
Feb 9 19:00:12.934942 systemd[1]: Created slice system-sshd.slice.
Feb 9 19:00:12.937259 systemd[1]: Started sshd@0-172.31.19.7:22-139.178.68.195:49276.service.
Feb 9 19:00:13.124054 sshd[1929]: Accepted publickey for core from 139.178.68.195 port 49276 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg
Feb 9 19:00:13.128299 sshd[1929]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:00:13.148252 systemd[1]: Created slice user-500.slice.
Feb 9 19:00:13.151306 systemd[1]: Starting user-runtime-dir@500.service...
Feb 9 19:00:13.158599 systemd-logind[1698]: New session 1 of user core.
Feb 9 19:00:13.167056 systemd[1]: Finished user-runtime-dir@500.service.
Feb 9 19:00:13.170456 systemd[1]: Starting user@500.service...
Feb 9 19:00:13.181395 (systemd)[1934]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:00:13.340462 systemd[1934]: Queued start job for default target default.target.
Feb 9 19:00:13.341341 systemd[1934]: Reached target paths.target.
Feb 9 19:00:13.341368 systemd[1934]: Reached target sockets.target.
Feb 9 19:00:13.341388 systemd[1934]: Reached target timers.target.
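The "Startup finished" entry above breaks boot time into a kernel and a userspace phase and reports their sum. A quick sanity check of that arithmetic, with the values taken directly from the log:

```python
# Phase durations from the systemd "Startup finished" entry above.
kernel_s = 9.844
userspace_s = 10.811

# systemd reports the sum of the phases as the total startup time.
total_s = round(kernel_s + userspace_s, 3)
print(f"{total_s}s")  # 20.655s, matching the logged total
```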
Feb 9 19:00:13.341406 systemd[1934]: Reached target basic.target.
Feb 9 19:00:13.341584 systemd[1]: Started user@500.service.
Feb 9 19:00:13.342897 systemd[1]: Started session-1.scope.
Feb 9 19:00:13.343183 systemd[1934]: Reached target default.target.
Feb 9 19:00:13.343470 systemd[1934]: Startup finished in 148ms.
Feb 9 19:00:13.497458 systemd[1]: Started sshd@1-172.31.19.7:22-139.178.68.195:49282.service.
Feb 9 19:00:13.666552 sshd[1943]: Accepted publickey for core from 139.178.68.195 port 49282 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg
Feb 9 19:00:13.668094 sshd[1943]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:00:13.676250 systemd[1]: Started session-2.scope.
Feb 9 19:00:13.677245 systemd-logind[1698]: New session 2 of user core.
Feb 9 19:00:13.806652 sshd[1943]: pam_unix(sshd:session): session closed for user core
Feb 9 19:00:13.810116 systemd[1]: sshd@1-172.31.19.7:22-139.178.68.195:49282.service: Deactivated successfully.
Feb 9 19:00:13.812090 systemd[1]: session-2.scope: Deactivated successfully.
Feb 9 19:00:13.812938 systemd-logind[1698]: Session 2 logged out. Waiting for processes to exit.
Feb 9 19:00:13.814258 systemd-logind[1698]: Removed session 2.
Feb 9 19:00:13.832825 systemd[1]: Started sshd@2-172.31.19.7:22-139.178.68.195:49288.service.
Feb 9 19:00:13.999362 sshd[1950]: Accepted publickey for core from 139.178.68.195 port 49288 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg
Feb 9 19:00:14.001626 sshd[1950]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:00:14.008624 systemd-logind[1698]: New session 3 of user core.
Feb 9 19:00:14.010612 systemd[1]: Started session-3.scope.
Feb 9 19:00:14.133925 sshd[1950]: pam_unix(sshd:session): session closed for user core
Feb 9 19:00:14.137230 systemd[1]: sshd@2-172.31.19.7:22-139.178.68.195:49288.service: Deactivated successfully.
Feb 9 19:00:14.138597 systemd[1]: session-3.scope: Deactivated successfully.
Feb 9 19:00:14.141014 systemd-logind[1698]: Session 3 logged out. Waiting for processes to exit.
Feb 9 19:00:14.142851 systemd-logind[1698]: Removed session 3.
Feb 9 19:00:14.159408 systemd[1]: Started sshd@3-172.31.19.7:22-139.178.68.195:49302.service.
Feb 9 19:00:14.323767 sshd[1957]: Accepted publickey for core from 139.178.68.195 port 49302 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg
Feb 9 19:00:14.324857 sshd[1957]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:00:14.333487 systemd[1]: Started session-4.scope.
Feb 9 19:00:14.334039 systemd-logind[1698]: New session 4 of user core.
Feb 9 19:00:14.456304 sshd[1957]: pam_unix(sshd:session): session closed for user core
Feb 9 19:00:14.459220 systemd[1]: sshd@3-172.31.19.7:22-139.178.68.195:49302.service: Deactivated successfully.
Feb 9 19:00:14.460439 systemd-logind[1698]: Session 4 logged out. Waiting for processes to exit.
Feb 9 19:00:14.460563 systemd[1]: session-4.scope: Deactivated successfully.
Feb 9 19:00:14.462977 systemd-logind[1698]: Removed session 4.
Feb 9 19:00:14.481940 systemd[1]: Started sshd@4-172.31.19.7:22-139.178.68.195:49310.service.
Feb 9 19:00:14.646181 sshd[1964]: Accepted publickey for core from 139.178.68.195 port 49310 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg
Feb 9 19:00:14.648161 sshd[1964]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:00:14.654993 systemd[1]: Started session-5.scope.
Feb 9 19:00:14.655371 systemd-logind[1698]: New session 5 of user core.
Feb 9 19:00:14.788349 sudo[1968]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Feb 9 19:00:14.791262 sudo[1968]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 9 19:00:14.801929 dbus-daemon[1684]: Э\x84\xa1xU: received setenforce notice (enforcing=1604510816)
Feb 9 19:00:14.805233 sudo[1968]: pam_unix(sudo:session): session closed for user root
Feb 9 19:00:14.831600 sshd[1964]: pam_unix(sshd:session): session closed for user core
Feb 9 19:00:14.837813 systemd[1]: sshd@4-172.31.19.7:22-139.178.68.195:49310.service: Deactivated successfully.
Feb 9 19:00:14.840445 systemd[1]: session-5.scope: Deactivated successfully.
Feb 9 19:00:14.840641 systemd-logind[1698]: Session 5 logged out. Waiting for processes to exit.
Feb 9 19:00:14.852486 systemd-logind[1698]: Removed session 5.
Feb 9 19:00:14.868828 systemd[1]: Started sshd@5-172.31.19.7:22-139.178.68.195:49318.service.
Feb 9 19:00:15.043536 sshd[1972]: Accepted publickey for core from 139.178.68.195 port 49318 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg
Feb 9 19:00:15.045707 sshd[1972]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:00:15.056130 systemd[1]: Started session-6.scope.
Feb 9 19:00:15.056469 systemd-logind[1698]: New session 6 of user core.
Feb 9 19:00:15.165602 sudo[1977]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Feb 9 19:00:15.165941 sudo[1977]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 9 19:00:15.171161 sudo[1977]: pam_unix(sudo:session): session closed for user root
Feb 9 19:00:15.178709 sudo[1976]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Feb 9 19:00:15.179087 sudo[1976]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 9 19:00:15.191076 systemd[1]: Stopping audit-rules.service...
Feb 9 19:00:15.199293 kernel: kauditd_printk_skb: 121 callbacks suppressed
Feb 9 19:00:15.199407 kernel: audit: type=1305 audit(1707505215.191:131): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Feb 9 19:00:15.199440 kernel: audit: type=1300 audit(1707505215.191:131): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe631785d0 a2=420 a3=0 items=0 ppid=1 pid=1980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:00:15.191000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Feb 9 19:00:15.191000 audit[1980]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe631785d0 a2=420 a3=0 items=0 ppid=1 pid=1980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:00:15.195334 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 9 19:00:15.199873 auditctl[1980]: No rules
Feb 9 19:00:15.195652 systemd[1]: Stopped audit-rules.service.
Feb 9 19:00:15.199050 systemd[1]: Starting audit-rules.service...
Feb 9 19:00:15.216532 kernel: audit: type=1327 audit(1707505215.191:131): proctitle=2F7362696E2F617564697463746C002D44
Feb 9 19:00:15.191000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44
Feb 9 19:00:15.194000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:15.229692 kernel: audit: type=1131 audit(1707505215.194:132): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:15.242915 augenrules[1998]: No rules
Feb 9 19:00:15.243647 systemd[1]: Finished audit-rules.service.
Feb 9 19:00:15.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:15.243000 audit[1976]: USER_END pid=1976 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:15.245184 sudo[1976]: pam_unix(sudo:session): session closed for user root
Feb 9 19:00:15.255052 kernel: audit: type=1130 audit(1707505215.242:133): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:15.255111 kernel: audit: type=1106 audit(1707505215.243:134): pid=1976 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:15.243000 audit[1976]: CRED_DISP pid=1976 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:15.260892 kernel: audit: type=1104 audit(1707505215.243:135): pid=1976 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:15.269717 sshd[1972]: pam_unix(sshd:session): session closed for user core
Feb 9 19:00:15.281567 kernel: audit: type=1106 audit(1707505215.270:136): pid=1972 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:00:15.270000 audit[1972]: USER_END pid=1972 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:00:15.274647 systemd[1]: sshd@5-172.31.19.7:22-139.178.68.195:49318.service: Deactivated successfully.
Feb 9 19:00:15.275758 systemd[1]: session-6.scope: Deactivated successfully.
Feb 9 19:00:15.270000 audit[1972]: CRED_DISP pid=1972 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:00:15.284144 systemd-logind[1698]: Session 6 logged out. Waiting for processes to exit.
Feb 9 19:00:15.286594 systemd-logind[1698]: Removed session 6.
Feb 9 19:00:15.270000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.19.7:22-139.178.68.195:49318 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:15.293950 systemd[1]: Started sshd@6-172.31.19.7:22-139.178.68.195:49334.service.
Feb 9 19:00:15.296404 kernel: audit: type=1104 audit(1707505215.270:137): pid=1972 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:00:15.296444 kernel: audit: type=1131 audit(1707505215.270:138): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.19.7:22-139.178.68.195:49318 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:15.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.19.7:22-139.178.68.195:49334 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:15.457000 audit[2005]: USER_ACCT pid=2005 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:00:15.459551 sshd[2005]: Accepted publickey for core from 139.178.68.195 port 49334 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg
Feb 9 19:00:15.459000 audit[2005]: CRED_ACQ pid=2005 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:00:15.459000 audit[2005]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe9acd4d00 a2=3 a3=0 items=0 ppid=1 pid=2005 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:00:15.459000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 9 19:00:15.461673 sshd[2005]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:00:15.472061 systemd-logind[1698]: New session 7 of user core.
Feb 9 19:00:15.472630 systemd[1]: Started session-7.scope.
Feb 9 19:00:15.489000 audit[2005]: USER_START pid=2005 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:00:15.491000 audit[2008]: CRED_ACQ pid=2008 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:00:15.581000 audit[2009]: USER_ACCT pid=2009 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:15.581000 audit[2009]: CRED_REFR pid=2009 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:15.583330 sudo[2009]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 9 19:00:15.583650 sudo[2009]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 9 19:00:15.584000 audit[2009]: USER_START pid=2009 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:16.423851 systemd[1]: Starting docker.service...
Feb 9 19:00:16.477439 env[2024]: time="2024-02-09T19:00:16.477392790Z" level=info msg="Starting up"
Feb 9 19:00:16.479366 env[2024]: time="2024-02-09T19:00:16.479326175Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 9 19:00:16.479366 env[2024]: time="2024-02-09T19:00:16.479352540Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 9 19:00:16.479553 env[2024]: time="2024-02-09T19:00:16.479376298Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Feb 9 19:00:16.479553 env[2024]: time="2024-02-09T19:00:16.479390723Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 9 19:00:16.481801 env[2024]: time="2024-02-09T19:00:16.481765829Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 9 19:00:16.481801 env[2024]: time="2024-02-09T19:00:16.481787436Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 9 19:00:16.481961 env[2024]: time="2024-02-09T19:00:16.481808466Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Feb 9 19:00:16.481961 env[2024]: time="2024-02-09T19:00:16.481821126Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 9 19:00:16.490583 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport400188216-merged.mount: Deactivated successfully.
Feb 9 19:00:16.611338 env[2024]: time="2024-02-09T19:00:16.611293749Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Feb 9 19:00:16.611338 env[2024]: time="2024-02-09T19:00:16.611325758Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Feb 9 19:00:16.611637 env[2024]: time="2024-02-09T19:00:16.611578415Z" level=info msg="Loading containers: start."
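The audit PROCTITLE records in this log hex-encode the process command line, with NUL bytes separating argv elements. A small decoder (this is the standard audit encoding; the only assumption here is ASCII output) recovers the commands behind records such as proctitle=737368643A20636F7265205B707269765D above:

```python
def decode_proctitle(hex_str: str) -> str:
    """Decode an audit PROCTITLE value: hex-encoded argv, NUL-separated."""
    return bytes.fromhex(hex_str).replace(b"\x00", b" ").decode("ascii", errors="replace")

# Values taken from PROCTITLE records in the log above.
print(decode_proctitle("737368643A20636F7265205B707269765D"))  # sshd: core [priv]
print(decode_proctitle("2F7362696E2F617564697463746C002D44"))  # /sbin/auditctl -D
```

The same decoder applies to every PROCTITLE record that follows, including the iptables invocations made during Docker startup.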
Feb 9 19:00:16.673000 audit[2054]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2054 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 19:00:16.673000 audit[2054]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff80079dd0 a2=0 a3=7fff80079dbc items=0 ppid=2024 pid=2054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:00:16.673000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552
Feb 9 19:00:16.676000 audit[2056]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2056 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 19:00:16.676000 audit[2056]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff4b031e60 a2=0 a3=7fff4b031e4c items=0 ppid=2024 pid=2056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:00:16.676000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552
Feb 9 19:00:16.679000 audit[2058]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2058 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 19:00:16.679000 audit[2058]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd8b7709f0 a2=0 a3=7ffd8b7709dc items=0 ppid=2024 pid=2058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:00:16.679000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31
Feb 9 19:00:16.681000 audit[2060]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2060 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 19:00:16.681000 audit[2060]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc69820630 a2=0 a3=7ffc6982061c items=0 ppid=2024 pid=2060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:00:16.681000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32
Feb 9 19:00:16.685000 audit[2062]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=2062 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 19:00:16.685000 audit[2062]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffdf534f870 a2=0 a3=7ffdf534f85c items=0 ppid=2024 pid=2062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:00:16.685000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E
Feb 9 19:00:16.703000 audit[2067]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=2067 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 19:00:16.703000 audit[2067]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff44fc5cf0 a2=0 a3=7fff44fc5cdc items=0 ppid=2024 pid=2067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:00:16.703000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E
Feb 9 19:00:16.718000 audit[2069]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2069 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 19:00:16.718000 audit[2069]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffca526a110 a2=0 a3=7ffca526a0fc items=0 ppid=2024 pid=2069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:00:16.718000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552
Feb 9 19:00:16.721000 audit[2071]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=2071 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 19:00:16.721000 audit[2071]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffe8ce295e0 a2=0 a3=7ffe8ce295cc items=0 ppid=2024 pid=2071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:00:16.721000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E
Feb 9 19:00:16.723000 audit[2073]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=2073 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 19:00:16.723000 audit[2073]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffc87f28220 a2=0 a3=7ffc87f2820c items=0 ppid=2024 pid=2073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:00:16.723000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Feb 9 19:00:16.735000 audit[2077]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=2077 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 19:00:16.735000 audit[2077]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7fff38256030 a2=0 a3=7fff3825601c items=0 ppid=2024 pid=2077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:00:16.735000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552
Feb 9 19:00:16.736000 audit[2078]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2078 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 19:00:16.736000 audit[2078]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc464e6040 a2=0 a3=7ffc464e602c items=0 ppid=2024 pid=2078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:00:16.736000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Feb 9 19:00:16.758189 kernel: Initializing XFRM netlink socket
Feb 9 19:00:16.808176 env[2024]: time="2024-02-09T19:00:16.807959258Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Feb 9 19:00:16.809726 (udev-worker)[2034]: Network interface NamePolicy= disabled on kernel command line.
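The NETFILTER_CFG records in this run all share the same key=value shape, so a small regex (field names taken from the records themselves; nothing else is assumed) can summarize which tables and chains the Docker daemon is touching:

```python
import re

# Matches the leading fields of a NETFILTER_CFG audit record as seen in this log.
NETFILTER_RE = re.compile(
    r"NETFILTER_CFG table=(?P<table>\w+):\d+ family=(?P<family>\d+) "
    r"entries=(?P<entries>\d+) op=(?P<op>\w+)"
)

# A record copied from the log above.
record = ('audit[2054]: NETFILTER_CFG table=nat:2 family=2 entries=2 '
          'op=nft_register_chain pid=2054')
m = NETFILTER_RE.search(record)
print(m.group("table"), m.group("op"))  # nat nft_register_chain
```

Run over the whole log, this shows the expected pattern: Docker first registers its chains (DOCKER, DOCKER-USER, the two DOCKER-ISOLATION stages) and then inserts the rules that wire them into FORWARD, PREROUTING, OUTPUT, and POSTROUTING.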
Feb 9 19:00:16.868000 audit[2087]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=2087 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 19:00:16.868000 audit[2087]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffe916ff8e0 a2=0 a3=7ffe916ff8cc items=0 ppid=2024 pid=2087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:00:16.868000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445
Feb 9 19:00:16.949000 audit[2090]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=2090 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 19:00:16.949000 audit[2090]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffda6883280 a2=0 a3=7ffda688326c items=0 ppid=2024 pid=2090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:00:16.949000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E
Feb 9 19:00:16.954000 audit[2093]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=2093 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 19:00:16.954000 audit[2093]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fffc7553f60 a2=0 a3=7fffc7553f4c items=0 ppid=2024 pid=2093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:00:16.954000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054
Feb 9 19:00:16.956000 audit[2095]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=2095 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 19:00:16.956000 audit[2095]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffdce650d00 a2=0 a3=7ffdce650cec items=0 ppid=2024 pid=2095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:00:16.956000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054
Feb 9 19:00:16.959000 audit[2097]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=2097 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 19:00:16.959000 audit[2097]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffcc25fbcb0 a2=0 a3=7ffcc25fbc9c items=0 ppid=2024 pid=2097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:00:16.959000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552
Feb 9 19:00:16.961000 audit[2099]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=2099 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 19:00:16.961000 audit[2099]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffe36225710 a2=0 a3=7ffe362256fc items=0 ppid=2024 pid=2099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:00:16.961000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38
Feb 9 19:00:16.964000 audit[2101]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=2101 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 19:00:16.964000 audit[2101]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffe210ec070 a2=0 a3=7ffe210ec05c items=0 ppid=2024 pid=2101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:00:16.964000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552
Feb 9 19:00:16.976000 audit[2104]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=2104 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 19:00:16.976000 audit[2104]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7fff5e178fa0 a2=0 a3=7fff5e178f8c items=0 ppid=2024 pid=2104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:00:16.976000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054
Feb 9 19:00:16.979000 audit[2106]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=2106 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 19:00:16.979000 audit[2106]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffff49b9f00 a2=0 a3=7ffff49b9eec items=0 ppid=2024 pid=2106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:00:16.979000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31
Feb 9 19:00:16.982000 audit[2108]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=2108 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 19:00:16.982000 audit[2108]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffd7938ed90 a2=0 a3=7ffd7938ed7c items=0 ppid=2024 pid=2108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:00:16.982000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32
Feb 9 19:00:16.985000 audit[2110]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=2110 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 19:00:16.985000 audit[2110]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc3b05a990 a2=0 a3=7ffc3b05a97c items=0 ppid=2024 pid=2110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:00:16.985000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50
Feb 9 19:00:16.987417 systemd-networkd[1512]: docker0: Link UP
Feb 9 19:00:17.000000 audit[2114]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=2114 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 19:00:17.000000 audit[2114]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd68b0aab0 a2=0 a3=7ffd68b0aa9c items=0 ppid=2024 pid=2114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:00:17.000000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552
Feb 9 19:00:17.001000 audit[2115]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=2115 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 9 19:00:17.001000 audit[2115]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc05c27b00 a2=0 a3=7ffc05c27aec items=0 ppid=2024 pid=2115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:00:17.001000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Feb 9 19:00:17.003860 env[2024]: time="2024-02-09T19:00:17.003793584Z" level=info msg="Loading containers: done."
Feb 9 19:00:17.019440 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1468936965-merged.mount: Deactivated successfully.
Feb 9 19:00:17.040777 env[2024]: time="2024-02-09T19:00:17.040725430Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 9 19:00:17.041338 env[2024]: time="2024-02-09T19:00:17.041304079Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 9 19:00:17.041682 env[2024]: time="2024-02-09T19:00:17.041656837Z" level=info msg="Daemon has completed initialization" Feb 9 19:00:17.066009 systemd[1]: Started docker.service. Feb 9 19:00:17.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:17.082258 env[2024]: time="2024-02-09T19:00:17.082197705Z" level=info msg="API listen on /run/docker.sock" Feb 9 19:00:17.130185 systemd[1]: Reloading. Feb 9 19:00:17.220917 /usr/lib/systemd/system-generators/torcx-generator[2164]: time="2024-02-09T19:00:17Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:00:17.220958 /usr/lib/systemd/system-generators/torcx-generator[2164]: time="2024-02-09T19:00:17Z" level=info msg="torcx already run" Feb 9 19:00:17.361150 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:00:17.361173 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 9 19:00:17.384662 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:00:17.490657 systemd[1]: Started kubelet.service. Feb 9 19:00:17.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:17.579550 kubelet[2219]: E0209 19:00:17.576458 2219 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 19:00:17.580000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 9 19:00:17.581449 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:00:17.581820 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:00:18.122425 env[1709]: time="2024-02-09T19:00:18.122380617Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 9 19:00:18.789454 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1008644840.mount: Deactivated successfully. 
Feb 9 19:00:21.913201 env[1709]: time="2024-02-09T19:00:21.913152209Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:00:21.917675 env[1709]: time="2024-02-09T19:00:21.917625544Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:00:21.920160 env[1709]: time="2024-02-09T19:00:21.920125425Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:00:21.922443 env[1709]: time="2024-02-09T19:00:21.922409502Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:00:21.923980 env[1709]: time="2024-02-09T19:00:21.923934584Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f\"" Feb 9 19:00:21.940603 env[1709]: time="2024-02-09T19:00:21.940553765Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 9 19:00:25.423888 env[1709]: time="2024-02-09T19:00:25.423837773Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:00:25.426626 env[1709]: time="2024-02-09T19:00:25.426583428Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 9 19:00:25.428882 env[1709]: time="2024-02-09T19:00:25.428843987Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:00:25.430863 env[1709]: time="2024-02-09T19:00:25.430828068Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:00:25.431869 env[1709]: time="2024-02-09T19:00:25.431830686Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486\"" Feb 9 19:00:25.449905 env[1709]: time="2024-02-09T19:00:25.449857303Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 9 19:00:27.289981 amazon-ssm-agent[1774]: 2024-02-09 19:00:27 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds. 
Feb 9 19:00:27.422088 env[1709]: time="2024-02-09T19:00:27.422034117Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:00:27.424672 env[1709]: time="2024-02-09T19:00:27.424631918Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:00:27.427282 env[1709]: time="2024-02-09T19:00:27.427244257Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:00:27.429691 env[1709]: time="2024-02-09T19:00:27.429656780Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:00:27.430421 env[1709]: time="2024-02-09T19:00:27.430387331Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e\"" Feb 9 19:00:27.442229 env[1709]: time="2024-02-09T19:00:27.442189282Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 9 19:00:27.766097 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 9 19:00:27.791573 kernel: kauditd_printk_skb: 86 callbacks suppressed Feb 9 19:00:27.792710 kernel: audit: type=1130 audit(1707505227.764:175): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:00:27.792769 kernel: audit: type=1131 audit(1707505227.764:176): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:27.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:27.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:27.766375 systemd[1]: Stopped kubelet.service. Feb 9 19:00:27.770871 systemd[1]: Started kubelet.service. Feb 9 19:00:27.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:27.803549 kernel: audit: type=1130 audit(1707505227.769:177): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:27.853699 kubelet[2255]: E0209 19:00:27.853656 2255 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 19:00:27.856000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 9 19:00:27.857479 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:00:27.857736 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 9 19:00:27.864565 kernel: audit: type=1131 audit(1707505227.856:178): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 9 19:00:28.749523 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2699010053.mount: Deactivated successfully. Feb 9 19:00:29.555884 env[1709]: time="2024-02-09T19:00:29.555836035Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:00:29.564069 env[1709]: time="2024-02-09T19:00:29.564019952Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:00:29.566585 env[1709]: time="2024-02-09T19:00:29.566545289Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:00:29.568364 env[1709]: time="2024-02-09T19:00:29.568308471Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:00:29.569290 env[1709]: time="2024-02-09T19:00:29.569250823Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\"" Feb 9 19:00:29.587327 env[1709]: time="2024-02-09T19:00:29.587281577Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 9 19:00:30.092488 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2946104500.mount: Deactivated successfully. 
Feb 9 19:00:30.104755 env[1709]: time="2024-02-09T19:00:30.104705383Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:00:30.114392 env[1709]: time="2024-02-09T19:00:30.114346850Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:00:30.120051 env[1709]: time="2024-02-09T19:00:30.119991710Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:00:30.123169 env[1709]: time="2024-02-09T19:00:30.123125004Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:00:30.124817 env[1709]: time="2024-02-09T19:00:30.124585334Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 9 19:00:30.137903 env[1709]: time="2024-02-09T19:00:30.137864741Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\"" Feb 9 19:00:31.146572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2373048615.mount: Deactivated successfully. Feb 9 19:00:34.811383 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 9 19:00:34.810000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:00:34.821631 kernel: audit: type=1131 audit(1707505234.810:179): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:37.091814 env[1709]: time="2024-02-09T19:00:37.091760482Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:00:37.095414 env[1709]: time="2024-02-09T19:00:37.095370664Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:00:37.098393 env[1709]: time="2024-02-09T19:00:37.098353077Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:00:37.100640 env[1709]: time="2024-02-09T19:00:37.100582365Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:00:37.101525 env[1709]: time="2024-02-09T19:00:37.101473543Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7\"" Feb 9 19:00:37.124289 env[1709]: time="2024-02-09T19:00:37.124246036Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 9 19:00:37.822525 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1543067241.mount: Deactivated successfully. 
Feb 9 19:00:38.042708 kernel: audit: type=1130 audit(1707505238.015:180): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:38.042957 kernel: audit: type=1131 audit(1707505238.015:181): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:38.043006 kernel: audit: type=1130 audit(1707505238.026:182): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:38.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:38.015000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:38.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:38.016348 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 9 19:00:38.016705 systemd[1]: Stopped kubelet.service. Feb 9 19:00:38.028003 systemd[1]: Started kubelet.service. 
Feb 9 19:00:38.186905 kubelet[2281]: E0209 19:00:38.186718 2281 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 19:00:38.190000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 9 19:00:38.189632 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:00:38.189909 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:00:38.195529 kernel: audit: type=1131 audit(1707505238.190:183): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 9 19:00:38.714631 env[1709]: time="2024-02-09T19:00:38.714579972Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:00:38.718127 env[1709]: time="2024-02-09T19:00:38.718017451Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:00:38.720821 env[1709]: time="2024-02-09T19:00:38.720788647Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:00:38.724531 env[1709]: time="2024-02-09T19:00:38.724475913Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 
19:00:38.725353 env[1709]: time="2024-02-09T19:00:38.725315400Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a\"" Feb 9 19:00:41.137878 systemd[1]: Stopped kubelet.service. Feb 9 19:00:41.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:41.149272 kernel: audit: type=1130 audit(1707505241.138:184): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:41.149406 kernel: audit: type=1131 audit(1707505241.138:185): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:41.138000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:41.164118 systemd[1]: Reloading. Feb 9 19:00:41.279799 /usr/lib/systemd/system-generators/torcx-generator[2369]: time="2024-02-09T19:00:41Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:00:41.279836 /usr/lib/systemd/system-generators/torcx-generator[2369]: time="2024-02-09T19:00:41Z" level=info msg="torcx already run" Feb 9 19:00:41.422507 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. 
Support for CPUShares= will be removed soon. Feb 9 19:00:41.422725 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:00:41.461334 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:00:41.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:41.607677 systemd[1]: Started kubelet.service. Feb 9 19:00:41.619550 kernel: audit: type=1130 audit(1707505241.607:186): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:41.688418 kubelet[2424]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 19:00:41.688418 kubelet[2424]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:00:41.689650 kubelet[2424]: I0209 19:00:41.689595 2424 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 19:00:41.692489 kubelet[2424]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. 
Feb 9 19:00:41.692489 kubelet[2424]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:00:42.354187 kubelet[2424]: I0209 19:00:42.354129 2424 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 19:00:42.354187 kubelet[2424]: I0209 19:00:42.354180 2424 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 19:00:42.354729 kubelet[2424]: I0209 19:00:42.354716 2424 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 19:00:42.377340 kubelet[2424]: E0209 19:00:42.377309 2424 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.19.7:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.19.7:6443: connect: connection refused Feb 9 19:00:42.377634 kubelet[2424]: I0209 19:00:42.377618 2424 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 19:00:42.380792 kubelet[2424]: I0209 19:00:42.380760 2424 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 19:00:42.383378 kubelet[2424]: I0209 19:00:42.383343 2424 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 19:00:42.383533 kubelet[2424]: I0209 19:00:42.383463 2424 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 19:00:42.384466 kubelet[2424]: I0209 19:00:42.384444 2424 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 19:00:42.384548 kubelet[2424]: I0209 19:00:42.384477 2424 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 19:00:42.385989 kubelet[2424]: I0209 19:00:42.385964 2424 state_mem.go:36] "Initialized new 
in-memory state store" Feb 9 19:00:42.409095 kubelet[2424]: I0209 19:00:42.409061 2424 kubelet.go:398] "Attempting to sync node with API server" Feb 9 19:00:42.409095 kubelet[2424]: I0209 19:00:42.409095 2424 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 19:00:42.409282 kubelet[2424]: I0209 19:00:42.409128 2424 kubelet.go:297] "Adding apiserver pod source" Feb 9 19:00:42.410009 kubelet[2424]: I0209 19:00:42.409983 2424 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 19:00:42.412075 kubelet[2424]: W0209 19:00:42.412002 2424 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.19.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-7&limit=500&resourceVersion=0": dial tcp 172.31.19.7:6443: connect: connection refused Feb 9 19:00:42.412075 kubelet[2424]: E0209 19:00:42.412066 2424 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.19.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-7&limit=500&resourceVersion=0": dial tcp 172.31.19.7:6443: connect: connection refused Feb 9 19:00:42.414740 kubelet[2424]: W0209 19:00:42.414613 2424 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.19.7:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.19.7:6443: connect: connection refused Feb 9 19:00:42.414988 kubelet[2424]: E0209 19:00:42.414973 2424 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.19.7:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.19.7:6443: connect: connection refused Feb 9 19:00:42.415200 kubelet[2424]: I0209 19:00:42.415185 2424 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" 
apiVersion="v1" Feb 9 19:00:42.418452 kubelet[2424]: W0209 19:00:42.418421 2424 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 9 19:00:42.422357 kubelet[2424]: I0209 19:00:42.422328 2424 server.go:1186] "Started kubelet" Feb 9 19:00:42.423708 kubelet[2424]: I0209 19:00:42.423685 2424 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 19:00:42.425117 kubelet[2424]: I0209 19:00:42.425089 2424 server.go:451] "Adding debug handlers to kubelet server" Feb 9 19:00:42.426299 kubelet[2424]: E0209 19:00:42.426184 2424 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-19-7.17b246f8a2b218a2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-19-7", UID:"ip-172-31-19-7", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-19-7"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 0, 42, 422253730, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 0, 42, 422253730, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://172.31.19.7:6443/api/v1/namespaces/default/events": dial tcp 172.31.19.7:6443: connect: connection refused'(may retry after sleeping) Feb 9 
19:00:42.428566 kubelet[2424]: E0209 19:00:42.428550 2424 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 19:00:42.428686 kubelet[2424]: E0209 19:00:42.428675 2424 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 19:00:42.428000 audit[2424]: AVC avc: denied { mac_admin } for pid=2424 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:00:42.429591 kubelet[2424]: I0209 19:00:42.429576 2424 kubelet.go:1341] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Feb 9 19:00:42.429707 kubelet[2424]: I0209 19:00:42.429697 2424 kubelet.go:1345] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Feb 9 19:00:42.430009 kubelet[2424]: I0209 19:00:42.429994 2424 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 19:00:42.428000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:00:42.438194 kernel: audit: type=1400 audit(1707505242.428:187): avc: denied { mac_admin } for pid=2424 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:00:42.438412 kernel: audit: type=1401 audit(1707505242.428:187): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:00:42.438527 kernel: audit: type=1300 audit(1707505242.428:187): arch=c000003e syscall=188 success=no 
exit=-22 a0=c0006df500 a1=c000816fa8 a2=c0006df440 a3=25 items=0 ppid=1 pid=2424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:00:42.428000 audit[2424]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0006df500 a1=c000816fa8 a2=c0006df440 a3=25 items=0 ppid=1 pid=2424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:00:42.440983 kubelet[2424]: E0209 19:00:42.440965 2424 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ip-172-31-19-7\" not found" Feb 9 19:00:42.441160 kubelet[2424]: I0209 19:00:42.441149 2424 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 19:00:42.441333 kubelet[2424]: I0209 19:00:42.441323 2424 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 19:00:42.442153 kubelet[2424]: W0209 19:00:42.442106 2424 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.19.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.7:6443: connect: connection refused Feb 9 19:00:42.442284 kubelet[2424]: E0209 19:00:42.442273 2424 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.19.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.7:6443: connect: connection refused Feb 9 19:00:42.443563 kubelet[2424]: E0209 19:00:42.443541 2424 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://172.31.19.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-7?timeout=10s": dial tcp 172.31.19.7:6443: 
connect: connection refused Feb 9 19:00:42.428000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:00:42.452400 kernel: audit: type=1327 audit(1707505242.428:187): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:00:42.452636 kernel: audit: type=1400 audit(1707505242.428:188): avc: denied { mac_admin } for pid=2424 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:00:42.428000 audit[2424]: AVC avc: denied { mac_admin } for pid=2424 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:00:42.428000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:00:42.461152 kernel: audit: type=1401 audit(1707505242.428:188): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:00:42.461251 kernel: audit: type=1300 audit(1707505242.428:188): arch=c000003e syscall=188 success=no exit=-22 a0=c000afc720 a1=c000816fc0 a2=c0006df620 a3=25 items=0 ppid=1 pid=2424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:00:42.428000 audit[2424]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000afc720 a1=c000816fc0 a2=c0006df620 a3=25 items=0 ppid=1 pid=2424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:00:42.428000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:00:42.478000 audit[2437]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=2437 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:00:42.478000 audit[2437]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc5b41f080 a2=0 a3=7ffc5b41f06c items=0 ppid=2424 pid=2437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:00:42.478000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 9 19:00:42.481000 audit[2438]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=2438 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:00:42.481000 audit[2438]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff313bf220 a2=0 a3=7fff313bf20c items=0 ppid=2424 pid=2438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:00:42.481000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 9 19:00:42.495000 audit[2440]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=2440 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:00:42.495000 audit[2440]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=312 a0=3 a1=7ffc1a1a0190 a2=0 a3=7ffc1a1a017c items=0 ppid=2424 pid=2440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:00:42.495000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 9 19:00:42.502000 audit[2442]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=2442 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:00:42.502000 audit[2442]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffde5c8dae0 a2=0 a3=7ffde5c8dacc items=0 ppid=2424 pid=2442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:00:42.502000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 9 19:00:42.521395 kubelet[2424]: I0209 19:00:42.521375 2424 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 19:00:42.521617 kubelet[2424]: I0209 19:00:42.521606 2424 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 19:00:42.521686 kubelet[2424]: I0209 19:00:42.521680 2424 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:00:42.523000 audit[2445]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2445 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:00:42.523000 audit[2445]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffc022f08b0 a2=0 a3=7ffc022f089c items=0 ppid=2424 pid=2445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:00:42.523000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Feb 9 19:00:42.525000 audit[2446]: NETFILTER_CFG table=nat:31 family=2 entries=1 op=nft_register_chain pid=2446 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:00:42.525000 audit[2446]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff515a8920 a2=0 a3=7fff515a890c items=0 ppid=2424 pid=2446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:00:42.525000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 9 19:00:42.534000 audit[2451]: NETFILTER_CFG table=nat:32 family=2 entries=1 op=nft_register_rule pid=2451 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:00:42.534000 audit[2451]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7fffc61cbd10 a2=0 a3=7fffc61cbcfc items=0 ppid=2424 pid=2451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:00:42.534000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 9 19:00:42.537683 kubelet[2424]: I0209 19:00:42.537648 2424 policy_none.go:49] "None policy: Start" Feb 9 19:00:42.538654 kubelet[2424]: I0209 19:00:42.538628 2424 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 19:00:42.538654 kubelet[2424]: I0209 
19:00:42.538655 2424 state_mem.go:35] "Initializing new in-memory state store" Feb 9 19:00:42.542000 audit[2454]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=2454 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:00:42.542000 audit[2454]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7ffde0146350 a2=0 a3=7ffde014633c items=0 ppid=2424 pid=2454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:00:42.542000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 9 19:00:42.544457 kubelet[2424]: I0209 19:00:42.544163 2424 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-19-7" Feb 9 19:00:42.544636 kubelet[2424]: E0209 19:00:42.544623 2424 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.19.7:6443/api/v1/nodes\": dial tcp 172.31.19.7:6443: connect: connection refused" node="ip-172-31-19-7" Feb 9 19:00:42.544000 audit[2456]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=2456 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:00:42.544000 audit[2456]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc8fb754f0 a2=0 a3=7ffc8fb754dc items=0 ppid=2424 pid=2456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:00:42.544000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 9 19:00:42.552066 
kubelet[2424]: I0209 19:00:42.552041 2424 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 19:00:42.553000 audit[2424]: AVC avc: denied { mac_admin } for pid=2424 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:00:42.553000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:00:42.553000 audit[2424]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000d264b0 a1=c000c7a9d8 a2=c000d26480 a3=25 items=0 ppid=1 pid=2424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:00:42.553000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:00:42.554574 kubelet[2424]: I0209 19:00:42.554556 2424 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Feb 9 19:00:42.554926 kubelet[2424]: I0209 19:00:42.554911 2424 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 19:00:42.556094 kubelet[2424]: E0209 19:00:42.556078 2424 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-19-7\" not found" Feb 9 19:00:42.556000 audit[2457]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_chain pid=2457 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:00:42.556000 audit[2457]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe58689690 a2=0 a3=7ffe5868967c items=0 ppid=2424 pid=2457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:00:42.556000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 9 19:00:42.561000 audit[2459]: NETFILTER_CFG table=nat:36 family=2 entries=1 op=nft_register_rule pid=2459 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:00:42.561000 audit[2459]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffc8054a060 a2=0 a3=7ffc8054a04c items=0 ppid=2424 pid=2459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:00:42.561000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 9 19:00:42.564000 audit[2461]: NETFILTER_CFG table=nat:37 family=2 entries=1 op=nft_register_rule pid=2461 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:00:42.564000 audit[2461]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffcab4679d0 a2=0 a3=7ffcab4679bc items=0 ppid=2424 pid=2461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:00:42.564000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 9 19:00:42.567000 audit[2463]: NETFILTER_CFG table=nat:38 family=2 entries=1 op=nft_register_rule pid=2463 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:00:42.567000 audit[2463]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7ffde124a120 a2=0 a3=7ffde124a10c items=0 ppid=2424 pid=2463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:00:42.567000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 9 19:00:42.570000 audit[2465]: NETFILTER_CFG table=nat:39 family=2 entries=1 op=nft_register_rule pid=2465 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:00:42.570000 audit[2465]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7ffe5957e020 a2=0 a3=7ffe5957e00c items=0 ppid=2424 pid=2465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:00:42.570000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 9 19:00:42.573000 audit[2467]: NETFILTER_CFG table=nat:40 family=2 entries=1 op=nft_register_rule pid=2467 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:00:42.573000 audit[2467]: SYSCALL arch=c000003e syscall=46 success=yes exit=540 a0=3 a1=7ffedab3e680 a2=0 a3=7ffedab3e66c items=0 ppid=2424 pid=2467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:00:42.573000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 9 19:00:42.574435 kubelet[2424]: I0209 19:00:42.574412 2424 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Feb 9 19:00:42.575000 audit[2468]: NETFILTER_CFG table=mangle:41 family=10 entries=2 op=nft_register_chain pid=2468 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:00:42.575000 audit[2468]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffdaeb8ccf0 a2=0 a3=7ffdaeb8ccdc items=0 ppid=2424 pid=2468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:00:42.575000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 9 19:00:42.575000 audit[2469]: NETFILTER_CFG table=mangle:42 family=2 entries=1 op=nft_register_chain pid=2469 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:00:42.575000 audit[2469]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffddeb9e640 a2=0 a3=7ffddeb9e62c items=0 ppid=2424 pid=2469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:00:42.575000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 9 19:00:42.577000 audit[2471]: NETFILTER_CFG table=nat:43 family=2 entries=1 op=nft_register_chain pid=2471 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:00:42.577000 audit[2471]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe1a03b500 a2=0 a3=7ffe1a03b4ec items=0 ppid=2424 pid=2471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:00:42.577000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 9 19:00:42.578000 audit[2470]: NETFILTER_CFG table=nat:44 family=10 entries=2 op=nft_register_chain pid=2470 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:00:42.578000 audit[2470]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd40699e50 a2=0 a3=10e3 items=0 ppid=2424 pid=2470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:00:42.578000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 9 19:00:42.579000 audit[2472]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_chain pid=2472 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:00:42.579000 audit[2472]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff099bf120 a2=0 a3=7fff099bf10c items=0 ppid=2424 pid=2472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:00:42.579000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 9 19:00:42.581000 audit[2474]: NETFILTER_CFG table=nat:46 family=10 entries=1 op=nft_register_rule pid=2474 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:00:42.581000 audit[2474]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffc43a979b0 a2=0 a3=7ffc43a9799c items=0 ppid=2424 pid=2474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:00:42.581000 
audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 9 19:00:42.582000 audit[2475]: NETFILTER_CFG table=filter:47 family=10 entries=2 op=nft_register_chain pid=2475 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:00:42.582000 audit[2475]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7fff9dae9ff0 a2=0 a3=7fff9dae9fdc items=0 ppid=2424 pid=2475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:00:42.582000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 9 19:00:42.585000 audit[2477]: NETFILTER_CFG table=filter:48 family=10 entries=1 op=nft_register_rule pid=2477 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:00:42.585000 audit[2477]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7ffedd97e5c0 a2=0 a3=7ffedd97e5ac items=0 ppid=2424 pid=2477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:00:42.585000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 9 19:00:42.586000 audit[2478]: NETFILTER_CFG table=nat:49 family=10 entries=1 op=nft_register_chain pid=2478 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:00:42.586000 audit[2478]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc84d7a0d0 a2=0 a3=7ffc84d7a0bc items=0 ppid=2424 
pid=2478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:00:42.586000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 9 19:00:42.587000 audit[2479]: NETFILTER_CFG table=nat:50 family=10 entries=1 op=nft_register_chain pid=2479 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:00:42.587000 audit[2479]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff1dd58d70 a2=0 a3=7fff1dd58d5c items=0 ppid=2424 pid=2479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:00:42.587000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 9 19:00:42.590000 audit[2481]: NETFILTER_CFG table=nat:51 family=10 entries=1 op=nft_register_rule pid=2481 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:00:42.590000 audit[2481]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffc287fe4c0 a2=0 a3=7ffc287fe4ac items=0 ppid=2424 pid=2481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:00:42.590000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 9 19:00:42.593000 audit[2483]: NETFILTER_CFG table=nat:52 family=10 entries=2 op=nft_register_chain pid=2483 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:00:42.593000 audit[2483]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 
a0=3 a1=7ffe401da7f0 a2=0 a3=7ffe401da7dc items=0 ppid=2424 pid=2483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:00:42.593000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 9 19:00:42.597000 audit[2485]: NETFILTER_CFG table=nat:53 family=10 entries=1 op=nft_register_rule pid=2485 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:00:42.597000 audit[2485]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7ffc2a70acb0 a2=0 a3=7ffc2a70ac9c items=0 ppid=2424 pid=2485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:00:42.597000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 9 19:00:42.600000 audit[2487]: NETFILTER_CFG table=nat:54 family=10 entries=1 op=nft_register_rule pid=2487 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:00:42.600000 audit[2487]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7ffe5289a9c0 a2=0 a3=7ffe5289a9ac items=0 ppid=2424 pid=2487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:00:42.600000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 9 19:00:42.606000 audit[2489]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_rule pid=2489 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:00:42.606000 audit[2489]: SYSCALL arch=c000003e syscall=46 success=yes exit=556 a0=3 a1=7ffdc03dc420 a2=0 a3=7ffdc03dc40c items=0 ppid=2424 pid=2489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:00:42.606000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 9 19:00:42.607614 kubelet[2424]: I0209 19:00:42.607396 2424 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 19:00:42.607842 kubelet[2424]: I0209 19:00:42.607830 2424 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 19:00:42.608153 kubelet[2424]: I0209 19:00:42.608142 2424 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 19:00:42.608332 kubelet[2424]: E0209 19:00:42.608296 2424 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 19:00:42.609224 kubelet[2424]: W0209 19:00:42.609156 2424 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.19.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.7:6443: connect: connection refused Feb 9 19:00:42.609365 kubelet[2424]: E0209 19:00:42.609355 2424 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.19.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.7:6443: connect: connection refused Feb 9 19:00:42.610000 audit[2490]: NETFILTER_CFG table=mangle:56 family=10 entries=1 op=nft_register_chain pid=2490 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:00:42.610000 audit[2490]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe8ae0da30 a2=0 a3=7ffe8ae0da1c items=0 ppid=2424 pid=2490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:00:42.610000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 9 19:00:42.613000 audit[2491]: NETFILTER_CFG table=nat:57 family=10 entries=1 op=nft_register_chain pid=2491 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:00:42.613000 audit[2491]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe2a55b5a0 a2=0 a3=7ffe2a55b58c items=0 ppid=2424 pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:00:42.613000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 9 19:00:42.614000 audit[2492]: NETFILTER_CFG table=filter:58 family=10 entries=1 op=nft_register_chain pid=2492 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:00:42.614000 audit[2492]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcbb8b7460 a2=0 a3=7ffcbb8b744c items=0 ppid=2424 pid=2492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:00:42.614000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 9 19:00:42.644989 kubelet[2424]: E0209 19:00:42.644773 2424 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://172.31.19.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-7?timeout=10s": dial tcp 172.31.19.7:6443: connect: connection refused Feb 9 19:00:42.709262 kubelet[2424]: I0209 19:00:42.709221 2424 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:00:42.712229 kubelet[2424]: I0209 19:00:42.712201 2424 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:00:42.714661 kubelet[2424]: I0209 19:00:42.714595 2424 status_manager.go:698] "Failed to get status for pod" podUID=8142ad48c558aaee3b33bf798fd6d7bb pod="kube-system/kube-apiserver-ip-172-31-19-7" err="Get 
\"https://172.31.19.7:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ip-172-31-19-7\": dial tcp 172.31.19.7:6443: connect: connection refused" Feb 9 19:00:42.714661 kubelet[2424]: I0209 19:00:42.714661 2424 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:00:42.722173 kubelet[2424]: I0209 19:00:42.722142 2424 status_manager.go:698] "Failed to get status for pod" podUID=97131a277246323da2d27bfec971d228 pod="kube-system/kube-scheduler-ip-172-31-19-7" err="Get \"https://172.31.19.7:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ip-172-31-19-7\": dial tcp 172.31.19.7:6443: connect: connection refused" Feb 9 19:00:42.729505 kubelet[2424]: I0209 19:00:42.729477 2424 status_manager.go:698] "Failed to get status for pod" podUID=3eceb6257295f027c6bd389bcf1d232f pod="kube-system/kube-controller-manager-ip-172-31-19-7" err="Get \"https://172.31.19.7:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ip-172-31-19-7\": dial tcp 172.31.19.7:6443: connect: connection refused" Feb 9 19:00:42.748357 kubelet[2424]: I0209 19:00:42.748333 2424 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-19-7" Feb 9 19:00:42.748978 kubelet[2424]: E0209 19:00:42.748955 2424 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.19.7:6443/api/v1/nodes\": dial tcp 172.31.19.7:6443: connect: connection refused" node="ip-172-31-19-7" Feb 9 19:00:42.845480 kubelet[2424]: I0209 19:00:42.845368 2424 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3eceb6257295f027c6bd389bcf1d232f-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-7\" (UID: \"3eceb6257295f027c6bd389bcf1d232f\") " pod="kube-system/kube-controller-manager-ip-172-31-19-7" Feb 9 19:00:42.845835 kubelet[2424]: I0209 19:00:42.845500 2424 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8142ad48c558aaee3b33bf798fd6d7bb-ca-certs\") pod \"kube-apiserver-ip-172-31-19-7\" (UID: \"8142ad48c558aaee3b33bf798fd6d7bb\") " pod="kube-system/kube-apiserver-ip-172-31-19-7" Feb 9 19:00:42.845835 kubelet[2424]: I0209 19:00:42.845639 2424 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8142ad48c558aaee3b33bf798fd6d7bb-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-7\" (UID: \"8142ad48c558aaee3b33bf798fd6d7bb\") " pod="kube-system/kube-apiserver-ip-172-31-19-7" Feb 9 19:00:42.845835 kubelet[2424]: I0209 19:00:42.845684 2424 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8142ad48c558aaee3b33bf798fd6d7bb-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-7\" (UID: \"8142ad48c558aaee3b33bf798fd6d7bb\") " pod="kube-system/kube-apiserver-ip-172-31-19-7" Feb 9 19:00:42.845835 kubelet[2424]: I0209 19:00:42.845719 2424 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3eceb6257295f027c6bd389bcf1d232f-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-7\" (UID: \"3eceb6257295f027c6bd389bcf1d232f\") " pod="kube-system/kube-controller-manager-ip-172-31-19-7" Feb 9 19:00:42.845835 kubelet[2424]: I0209 19:00:42.845754 2424 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3eceb6257295f027c6bd389bcf1d232f-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-7\" (UID: \"3eceb6257295f027c6bd389bcf1d232f\") " pod="kube-system/kube-controller-manager-ip-172-31-19-7" Feb 9 19:00:42.846063 kubelet[2424]: I0209 19:00:42.845793 2424 
reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3eceb6257295f027c6bd389bcf1d232f-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-7\" (UID: \"3eceb6257295f027c6bd389bcf1d232f\") " pod="kube-system/kube-controller-manager-ip-172-31-19-7" Feb 9 19:00:42.846063 kubelet[2424]: I0209 19:00:42.845827 2424 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3eceb6257295f027c6bd389bcf1d232f-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-7\" (UID: \"3eceb6257295f027c6bd389bcf1d232f\") " pod="kube-system/kube-controller-manager-ip-172-31-19-7" Feb 9 19:00:42.846063 kubelet[2424]: I0209 19:00:42.845859 2424 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/97131a277246323da2d27bfec971d228-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-7\" (UID: \"97131a277246323da2d27bfec971d228\") " pod="kube-system/kube-scheduler-ip-172-31-19-7" Feb 9 19:00:43.021570 env[1709]: time="2024-02-09T19:00:43.019783093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-7,Uid:8142ad48c558aaee3b33bf798fd6d7bb,Namespace:kube-system,Attempt:0,}" Feb 9 19:00:43.027127 env[1709]: time="2024-02-09T19:00:43.027083571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-7,Uid:97131a277246323da2d27bfec971d228,Namespace:kube-system,Attempt:0,}" Feb 9 19:00:43.030235 env[1709]: time="2024-02-09T19:00:43.030196459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-7,Uid:3eceb6257295f027c6bd389bcf1d232f,Namespace:kube-system,Attempt:0,}" Feb 9 19:00:43.046206 kubelet[2424]: E0209 19:00:43.046114 2424 controller.go:146] failed to ensure lease exists, will retry in 800ms, 
error: Get "https://172.31.19.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-7?timeout=10s": dial tcp 172.31.19.7:6443: connect: connection refused Feb 9 19:00:43.151128 kubelet[2424]: I0209 19:00:43.151097 2424 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-19-7" Feb 9 19:00:43.152658 kubelet[2424]: E0209 19:00:43.152620 2424 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.19.7:6443/api/v1/nodes\": dial tcp 172.31.19.7:6443: connect: connection refused" node="ip-172-31-19-7" Feb 9 19:00:43.259783 kubelet[2424]: W0209 19:00:43.259730 2424 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.19.7:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.19.7:6443: connect: connection refused Feb 9 19:00:43.259783 kubelet[2424]: E0209 19:00:43.259784 2424 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.19.7:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.19.7:6443: connect: connection refused Feb 9 19:00:43.385825 kubelet[2424]: W0209 19:00:43.385705 2424 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.19.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-7&limit=500&resourceVersion=0": dial tcp 172.31.19.7:6443: connect: connection refused Feb 9 19:00:43.385825 kubelet[2424]: E0209 19:00:43.385764 2424 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.19.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-7&limit=500&resourceVersion=0": dial tcp 172.31.19.7:6443: connect: connection refused Feb 9 19:00:43.466814 kubelet[2424]: W0209 19:00:43.466773 2424 reflector.go:424] 
vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.19.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.7:6443: connect: connection refused Feb 9 19:00:43.466814 kubelet[2424]: E0209 19:00:43.466819 2424 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.19.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.7:6443: connect: connection refused Feb 9 19:00:43.540010 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount12963930.mount: Deactivated successfully. Feb 9 19:00:43.551709 env[1709]: time="2024-02-09T19:00:43.551658487Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:00:43.554147 env[1709]: time="2024-02-09T19:00:43.553964370Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:00:43.560342 env[1709]: time="2024-02-09T19:00:43.560290332Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:00:43.561390 env[1709]: time="2024-02-09T19:00:43.561353576Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:00:43.563461 env[1709]: time="2024-02-09T19:00:43.563424130Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:00:43.567913 env[1709]: 
time="2024-02-09T19:00:43.567872256Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:00:43.570130 env[1709]: time="2024-02-09T19:00:43.570086555Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:00:43.571045 env[1709]: time="2024-02-09T19:00:43.571009304Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:00:43.572058 env[1709]: time="2024-02-09T19:00:43.572022713Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:00:43.572908 env[1709]: time="2024-02-09T19:00:43.572877965Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:00:43.576180 env[1709]: time="2024-02-09T19:00:43.576122795Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:00:43.577316 env[1709]: time="2024-02-09T19:00:43.577279584Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:00:43.660803 env[1709]: time="2024-02-09T19:00:43.660415920Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:00:43.661207 env[1709]: time="2024-02-09T19:00:43.661154093Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:00:43.661362 env[1709]: time="2024-02-09T19:00:43.661336545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:00:43.661928 env[1709]: time="2024-02-09T19:00:43.661711829Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a6f846aeb46a59423ee39392b8d000384db2a2a3ad6805934ff3a48448b8979c pid=2514 runtime=io.containerd.runc.v2 Feb 9 19:00:43.662062 env[1709]: time="2024-02-09T19:00:43.661683572Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:00:43.662062 env[1709]: time="2024-02-09T19:00:43.661926549Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:00:43.662062 env[1709]: time="2024-02-09T19:00:43.661942336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:00:43.662262 env[1709]: time="2024-02-09T19:00:43.662139566Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2ed69e9b8edc6c935e5716ca02618dda1678164e4781c777593a3b9283f8f133 pid=2505 runtime=io.containerd.runc.v2 Feb 9 19:00:43.662355 env[1709]: time="2024-02-09T19:00:43.662303755Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:00:43.662439 env[1709]: time="2024-02-09T19:00:43.662346695Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:00:43.662439 env[1709]: time="2024-02-09T19:00:43.662362895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:00:43.662663 env[1709]: time="2024-02-09T19:00:43.662539655Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9131dad990590d883a50564154acbb4156a8f79704c42017b8ee01e55bb1478a pid=2526 runtime=io.containerd.runc.v2 Feb 9 19:00:43.846937 kubelet[2424]: E0209 19:00:43.846897 2424 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get "https://172.31.19.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-7?timeout=10s": dial tcp 172.31.19.7:6443: connect: connection refused Feb 9 19:00:43.875269 env[1709]: time="2024-02-09T19:00:43.875210644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-7,Uid:8142ad48c558aaee3b33bf798fd6d7bb,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ed69e9b8edc6c935e5716ca02618dda1678164e4781c777593a3b9283f8f133\"" Feb 9 19:00:43.881459 env[1709]: time="2024-02-09T19:00:43.880599544Z" level=info msg="CreateContainer within sandbox \"2ed69e9b8edc6c935e5716ca02618dda1678164e4781c777593a3b9283f8f133\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 9 19:00:43.886013 env[1709]: time="2024-02-09T19:00:43.885974807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-7,Uid:3eceb6257295f027c6bd389bcf1d232f,Namespace:kube-system,Attempt:0,} returns sandbox id \"9131dad990590d883a50564154acbb4156a8f79704c42017b8ee01e55bb1478a\"" Feb 9 19:00:43.889756 env[1709]: time="2024-02-09T19:00:43.889717682Z" level=info msg="CreateContainer within sandbox \"9131dad990590d883a50564154acbb4156a8f79704c42017b8ee01e55bb1478a\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 9 19:00:43.890002 env[1709]: time="2024-02-09T19:00:43.889974142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-7,Uid:97131a277246323da2d27bfec971d228,Namespace:kube-system,Attempt:0,} returns sandbox id \"a6f846aeb46a59423ee39392b8d000384db2a2a3ad6805934ff3a48448b8979c\"" Feb 9 19:00:43.894644 env[1709]: time="2024-02-09T19:00:43.894607892Z" level=info msg="CreateContainer within sandbox \"a6f846aeb46a59423ee39392b8d000384db2a2a3ad6805934ff3a48448b8979c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 9 19:00:43.920669 env[1709]: time="2024-02-09T19:00:43.920560733Z" level=info msg="CreateContainer within sandbox \"2ed69e9b8edc6c935e5716ca02618dda1678164e4781c777593a3b9283f8f133\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"43defd714b37593a7744570320a712c9cad32126beba87677f4b7b87c7cd8b3f\"" Feb 9 19:00:43.922600 env[1709]: time="2024-02-09T19:00:43.922559165Z" level=info msg="StartContainer for \"43defd714b37593a7744570320a712c9cad32126beba87677f4b7b87c7cd8b3f\"" Feb 9 19:00:43.933793 env[1709]: time="2024-02-09T19:00:43.933738505Z" level=info msg="CreateContainer within sandbox \"9131dad990590d883a50564154acbb4156a8f79704c42017b8ee01e55bb1478a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ccd564d6c54be83aac9b7466291c2a46052112ef463aab9c1ce91877996d64b8\"" Feb 9 19:00:43.936442 env[1709]: time="2024-02-09T19:00:43.936401580Z" level=info msg="StartContainer for \"ccd564d6c54be83aac9b7466291c2a46052112ef463aab9c1ce91877996d64b8\"" Feb 9 19:00:43.948419 kubelet[2424]: W0209 19:00:43.948352 2424 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.19.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.7:6443: connect: connection refused Feb 9 19:00:43.948419 
kubelet[2424]: E0209 19:00:43.948427 2424 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.19.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.7:6443: connect: connection refused Feb 9 19:00:43.948743 env[1709]: time="2024-02-09T19:00:43.948674792Z" level=info msg="CreateContainer within sandbox \"a6f846aeb46a59423ee39392b8d000384db2a2a3ad6805934ff3a48448b8979c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7bea63a03135bc8b1e83a103c7af836d9352537c178bc5e8c8367fd925b23701\"" Feb 9 19:00:43.949414 env[1709]: time="2024-02-09T19:00:43.949331437Z" level=info msg="StartContainer for \"7bea63a03135bc8b1e83a103c7af836d9352537c178bc5e8c8367fd925b23701\"" Feb 9 19:00:43.954934 kubelet[2424]: I0209 19:00:43.954382 2424 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-19-7" Feb 9 19:00:43.954934 kubelet[2424]: E0209 19:00:43.954906 2424 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.19.7:6443/api/v1/nodes\": dial tcp 172.31.19.7:6443: connect: connection refused" node="ip-172-31-19-7" Feb 9 19:00:44.171538 env[1709]: time="2024-02-09T19:00:44.171158467Z" level=info msg="StartContainer for \"ccd564d6c54be83aac9b7466291c2a46052112ef463aab9c1ce91877996d64b8\" returns successfully" Feb 9 19:00:44.171868 env[1709]: time="2024-02-09T19:00:44.171654527Z" level=info msg="StartContainer for \"43defd714b37593a7744570320a712c9cad32126beba87677f4b7b87c7cd8b3f\" returns successfully" Feb 9 19:00:44.178698 env[1709]: time="2024-02-09T19:00:44.178656438Z" level=info msg="StartContainer for \"7bea63a03135bc8b1e83a103c7af836d9352537c178bc5e8c8367fd925b23701\" returns successfully" Feb 9 19:00:44.512419 kubelet[2424]: E0209 19:00:44.512322 2424 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed 
certificate from the control plane: cannot create certificate signing request: Post "https://172.31.19.7:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.19.7:6443: connect: connection refused Feb 9 19:00:44.624432 kubelet[2424]: I0209 19:00:44.624405 2424 status_manager.go:698] "Failed to get status for pod" podUID=3eceb6257295f027c6bd389bcf1d232f pod="kube-system/kube-controller-manager-ip-172-31-19-7" err="Get \"https://172.31.19.7:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ip-172-31-19-7\": dial tcp 172.31.19.7:6443: connect: connection refused" Feb 9 19:00:44.633269 kubelet[2424]: I0209 19:00:44.632889 2424 status_manager.go:698] "Failed to get status for pod" podUID=8142ad48c558aaee3b33bf798fd6d7bb pod="kube-system/kube-apiserver-ip-172-31-19-7" err="Get \"https://172.31.19.7:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ip-172-31-19-7\": dial tcp 172.31.19.7:6443: connect: connection refused" Feb 9 19:00:44.636772 kubelet[2424]: I0209 19:00:44.636750 2424 status_manager.go:698] "Failed to get status for pod" podUID=97131a277246323da2d27bfec971d228 pod="kube-system/kube-scheduler-ip-172-31-19-7" err="Get \"https://172.31.19.7:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ip-172-31-19-7\": dial tcp 172.31.19.7:6443: connect: connection refused" Feb 9 19:00:45.448143 kubelet[2424]: E0209 19:00:45.448103 2424 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: Get "https://172.31.19.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-7?timeout=10s": dial tcp 172.31.19.7:6443: connect: connection refused Feb 9 19:00:45.556761 kubelet[2424]: I0209 19:00:45.556737 2424 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-19-7" Feb 9 19:00:45.557109 kubelet[2424]: E0209 19:00:45.557091 2424 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.19.7:6443/api/v1/nodes\": dial tcp 
172.31.19.7:6443: connect: connection refused" node="ip-172-31-19-7" Feb 9 19:00:48.372281 kubelet[2424]: E0209 19:00:48.372166 2424 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-19-7.17b246f8a2b218a2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-19-7", UID:"ip-172-31-19-7", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-19-7"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 0, 42, 422253730, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 0, 42, 422253730, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 19:00:48.416274 kubelet[2424]: I0209 19:00:48.416232 2424 apiserver.go:52] "Watching apiserver" Feb 9 19:00:48.430007 kubelet[2424]: E0209 19:00:48.429904 2424 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-19-7.17b246f8a313dfcd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-19-7", UID:"ip-172-31-19-7", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-19-7"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 0, 42, 428661709, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 0, 42, 428661709, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 19:00:48.442023 kubelet[2424]: I0209 19:00:48.441922 2424 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 19:00:48.488189 kubelet[2424]: E0209 19:00:48.488098 2424 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-19-7.17b246f8a87f45f6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-19-7", UID:"ip-172-31-19-7", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ip-172-31-19-7 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-19-7"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 0, 42, 519586294, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 0, 42, 519586294, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 19:00:48.495275 kubelet[2424]: I0209 19:00:48.495230 2424 reconciler.go:41] "Reconciler: start to sync state" Feb 9 19:00:48.546842 kubelet[2424]: E0209 19:00:48.546753 2424 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-19-7.17b246f8a87f6fc0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-19-7", UID:"ip-172-31-19-7", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ip-172-31-19-7 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-19-7"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 0, 42, 519596992, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 0, 42, 519596992, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 19:00:48.603333 kubelet[2424]: E0209 19:00:48.603227 2424 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-19-7.17b246f8a87f84f4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-19-7", UID:"ip-172-31-19-7", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node ip-172-31-19-7 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-19-7"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 0, 42, 519602420, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 0, 42, 519602420, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 19:00:48.650380 kubelet[2424]: E0209 19:00:48.650268 2424 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-19-7" not found Feb 9 19:00:48.655988 kubelet[2424]: E0209 19:00:48.655954 2424 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-19-7\" not found" node="ip-172-31-19-7" Feb 9 19:00:48.660594 kubelet[2424]: E0209 19:00:48.660489 2424 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-19-7.17b246f8a87f45f6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-19-7", UID:"ip-172-31-19-7", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ip-172-31-19-7 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-19-7"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 0, 42, 519586294, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 0, 42, 544057283, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 19:00:48.717806 kubelet[2424]: E0209 19:00:48.717676 2424 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-19-7.17b246f8a87f6fc0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-19-7", UID:"ip-172-31-19-7", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ip-172-31-19-7 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-19-7"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 0, 42, 519596992, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 0, 42, 544108740, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 19:00:48.759219 kubelet[2424]: I0209 19:00:48.759183 2424 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-19-7" Feb 9 19:00:48.774977 kubelet[2424]: E0209 19:00:48.774782 2424 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-19-7.17b246f8a87f84f4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-19-7", UID:"ip-172-31-19-7", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node ip-172-31-19-7 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-19-7"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 0, 42, 519602420, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 0, 42, 544115219, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 19:00:48.829935 kubelet[2424]: E0209 19:00:48.829827 2424 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-19-7.17b246f8aa9c9e6d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-19-7", UID:"ip-172-31-19-7", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-19-7"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 0, 42, 555063917, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 0, 42, 555063917, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 19:00:49.030344 kubelet[2424]: E0209 19:00:49.030240 2424 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-19-7.17b246f8a87f45f6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-19-7", UID:"ip-172-31-19-7", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ip-172-31-19-7 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-19-7"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 0, 42, 519586294, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 0, 42, 712108219, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 19:00:49.047714 kubelet[2424]: I0209 19:00:49.047683 2424 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-19-7" Feb 9 19:00:49.434295 kubelet[2424]: E0209 19:00:49.431387 2424 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-19-7.17b246f8a87f6fc0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-19-7", UID:"ip-172-31-19-7", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ip-172-31-19-7 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-19-7"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 0, 42, 519596992, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 0, 42, 712119167, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 9 19:00:49.508984 update_engine[1701]: I0209 19:00:49.508584 1701 update_attempter.cc:509] Updating boot flags... Feb 9 19:00:51.292288 systemd[1]: Reloading. 
Feb 9 19:00:51.488404 /usr/lib/systemd/system-generators/torcx-generator[2845]: time="2024-02-09T19:00:51Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:00:51.488444 /usr/lib/systemd/system-generators/torcx-generator[2845]: time="2024-02-09T19:00:51Z" level=info msg="torcx already run" Feb 9 19:00:51.635938 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:00:51.635962 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:00:51.664900 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:00:51.805868 kubelet[2424]: I0209 19:00:51.805219 2424 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 19:00:51.811658 systemd[1]: Stopping kubelet.service... Feb 9 19:00:51.826206 systemd[1]: kubelet.service: Deactivated successfully. Feb 9 19:00:51.826855 systemd[1]: Stopped kubelet.service. Feb 9 19:00:51.825000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:00:51.828234 kernel: kauditd_printk_skb: 104 callbacks suppressed Feb 9 19:00:51.828315 kernel: audit: type=1131 audit(1707505251.825:223): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:51.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:51.839643 systemd[1]: Started kubelet.service. Feb 9 19:00:51.850723 kernel: audit: type=1130 audit(1707505251.838:224): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:51.984456 kubelet[2906]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 19:00:51.984850 kubelet[2906]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:00:51.985007 kubelet[2906]: I0209 19:00:51.984983 2906 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 19:00:51.986537 kubelet[2906]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 19:00:51.986652 kubelet[2906]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:00:51.990298 kubelet[2906]: I0209 19:00:51.990279 2906 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 19:00:51.990416 kubelet[2906]: I0209 19:00:51.990408 2906 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 19:00:51.990743 kubelet[2906]: I0209 19:00:51.990731 2906 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 19:00:51.992252 kubelet[2906]: I0209 19:00:51.992236 2906 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 9 19:00:51.996438 kubelet[2906]: I0209 19:00:51.996372 2906 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 19:00:51.996951 kubelet[2906]: I0209 19:00:51.996930 2906 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 9 19:00:51.997438 kubelet[2906]: I0209 19:00:51.997418 2906 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 19:00:51.997588 kubelet[2906]: I0209 19:00:51.997571 2906 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan 
Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 19:00:51.997752 kubelet[2906]: I0209 19:00:51.997604 2906 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 19:00:51.997752 kubelet[2906]: I0209 19:00:51.997621 2906 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 19:00:51.997752 kubelet[2906]: I0209 19:00:51.997668 2906 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:00:52.009940 kubelet[2906]: I0209 19:00:52.009916 2906 kubelet.go:398] "Attempting to sync node with API server" Feb 9 19:00:52.018953 kubelet[2906]: I0209 19:00:52.009941 2906 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 19:00:52.021079 kubelet[2906]: I0209 19:00:52.021046 2906 kubelet.go:297] "Adding apiserver pod source" Feb 9 19:00:52.021476 kubelet[2906]: I0209 19:00:52.021460 2906 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 19:00:52.039590 kubelet[2906]: I0209 19:00:52.039476 2906 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 19:00:52.046318 kubelet[2906]: I0209 19:00:52.040556 2906 server.go:1186] "Started kubelet" Feb 9 19:00:52.065566 kernel: audit: type=1400 audit(1707505252.048:225): avc: denied { mac_admin } for pid=2906 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:00:52.066096 kernel: audit: type=1401 audit(1707505252.048:225): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:00:52.048000 audit[2906]: AVC avc: denied { mac_admin } for pid=2906 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:00:52.048000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:00:52.066505 kubelet[2906]: I0209 19:00:52.050610 2906 kubelet.go:1341] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Feb 9 19:00:52.066505 kubelet[2906]: I0209 19:00:52.050741 2906 kubelet.go:1345] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Feb 9 19:00:52.066505 kubelet[2906]: I0209 19:00:52.050776 2906 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 19:00:52.066505 kubelet[2906]: I0209 19:00:52.053910 2906 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 19:00:52.066505 kubelet[2906]: I0209 19:00:52.064292 2906 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 19:00:52.080020 kernel: audit: type=1300 audit(1707505252.048:225): arch=c000003e syscall=188 success=no exit=-22 a0=c0002f3ef0 a1=c000a7e018 a2=c0002f3e90 a3=25 items=0 ppid=1 pid=2906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:00:52.048000 audit[2906]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0002f3ef0 a1=c000a7e018 a2=c0002f3e90 a3=25 items=0 
ppid=1 pid=2906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:00:52.080865 kubelet[2906]: I0209 19:00:52.071589 2906 server.go:451] "Adding debug handlers to kubelet server" Feb 9 19:00:52.080865 kubelet[2906]: I0209 19:00:52.073371 2906 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 19:00:52.048000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:00:52.087585 kernel: audit: type=1327 audit(1707505252.048:225): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:00:52.087793 kubelet[2906]: E0209 19:00:52.087776 2906 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 19:00:52.087946 kubelet[2906]: E0209 19:00:52.087934 2906 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 19:00:52.049000 audit[2906]: AVC avc: denied { mac_admin } for pid=2906 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:00:52.105588 kernel: audit: type=1400 audit(1707505252.049:226): avc: denied { mac_admin } for pid=2906 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:00:52.049000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:00:52.118527 kernel: audit: type=1401 audit(1707505252.049:226): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:00:52.128546 kernel: audit: type=1300 audit(1707505252.049:226): arch=c000003e syscall=188 success=no exit=-22 a0=c000bfea40 a1=c000a7e030 a2=c0002f3fb0 a3=25 items=0 ppid=1 pid=2906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:00:52.049000 audit[2906]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000bfea40 a1=c000a7e030 a2=c0002f3fb0 a3=25 items=0 ppid=1 pid=2906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:00:52.049000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:00:52.138577 kernel: audit: type=1327 audit(1707505252.049:226): 
proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:00:52.181755 kubelet[2906]: I0209 19:00:52.181732 2906 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-19-7" Feb 9 19:00:52.197046 kubelet[2906]: I0209 19:00:52.197002 2906 kubelet_node_status.go:108] "Node was previously registered" node="ip-172-31-19-7" Feb 9 19:00:52.197310 kubelet[2906]: I0209 19:00:52.197297 2906 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-19-7" Feb 9 19:00:52.319766 kubelet[2906]: I0209 19:00:52.319735 2906 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 19:00:52.362389 kubelet[2906]: I0209 19:00:52.362366 2906 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 19:00:52.362588 kubelet[2906]: I0209 19:00:52.362577 2906 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 19:00:52.362678 kubelet[2906]: I0209 19:00:52.362670 2906 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:00:52.362901 kubelet[2906]: I0209 19:00:52.362891 2906 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 9 19:00:52.363018 kubelet[2906]: I0209 19:00:52.363009 2906 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 9 19:00:52.363245 kubelet[2906]: I0209 19:00:52.363228 2906 policy_none.go:49] "None policy: Start" Feb 9 19:00:52.364134 kubelet[2906]: I0209 19:00:52.364120 2906 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 19:00:52.364811 kubelet[2906]: I0209 19:00:52.364799 2906 state_mem.go:35] "Initializing new in-memory state store" Feb 9 19:00:52.365153 kubelet[2906]: I0209 19:00:52.365143 2906 state_mem.go:75] "Updated machine memory state" Feb 9 19:00:52.367363 kubelet[2906]: I0209 19:00:52.367349 2906 manager.go:455] "Failed to 
read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 19:00:52.365000 audit[2906]: AVC avc: denied { mac_admin } for pid=2906 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:00:52.365000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:00:52.365000 audit[2906]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00108d6e0 a1=c001084ff0 a2=c00108d6b0 a3=25 items=0 ppid=1 pid=2906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:00:52.365000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:00:52.368944 kubelet[2906]: I0209 19:00:52.368926 2906 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Feb 9 19:00:52.371896 kubelet[2906]: I0209 19:00:52.371879 2906 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 19:00:52.412318 kubelet[2906]: I0209 19:00:52.412292 2906 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 19:00:52.413219 kubelet[2906]: I0209 19:00:52.413202 2906 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 19:00:52.414364 kubelet[2906]: I0209 19:00:52.414310 2906 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 19:00:52.415790 kubelet[2906]: E0209 19:00:52.415734 2906 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 19:00:52.526398 kubelet[2906]: I0209 19:00:52.526362 2906 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:00:52.526639 kubelet[2906]: I0209 19:00:52.526627 2906 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:00:52.526738 kubelet[2906]: I0209 19:00:52.526730 2906 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:00:52.599819 kubelet[2906]: I0209 19:00:52.598755 2906 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/97131a277246323da2d27bfec971d228-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-7\" (UID: \"97131a277246323da2d27bfec971d228\") " pod="kube-system/kube-scheduler-ip-172-31-19-7" Feb 9 19:00:52.600357 kubelet[2906]: I0209 19:00:52.600334 2906 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8142ad48c558aaee3b33bf798fd6d7bb-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-7\" (UID: \"8142ad48c558aaee3b33bf798fd6d7bb\") " pod="kube-system/kube-apiserver-ip-172-31-19-7" Feb 9 19:00:52.602612 kubelet[2906]: I0209 19:00:52.602586 2906 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3eceb6257295f027c6bd389bcf1d232f-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-7\" (UID: \"3eceb6257295f027c6bd389bcf1d232f\") " pod="kube-system/kube-controller-manager-ip-172-31-19-7" Feb 9 
19:00:52.602947 kubelet[2906]: I0209 19:00:52.602919 2906 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3eceb6257295f027c6bd389bcf1d232f-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-7\" (UID: \"3eceb6257295f027c6bd389bcf1d232f\") " pod="kube-system/kube-controller-manager-ip-172-31-19-7" Feb 9 19:00:52.603491 kubelet[2906]: I0209 19:00:52.603467 2906 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3eceb6257295f027c6bd389bcf1d232f-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-7\" (UID: \"3eceb6257295f027c6bd389bcf1d232f\") " pod="kube-system/kube-controller-manager-ip-172-31-19-7" Feb 9 19:00:52.603699 kubelet[2906]: I0209 19:00:52.603688 2906 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8142ad48c558aaee3b33bf798fd6d7bb-ca-certs\") pod \"kube-apiserver-ip-172-31-19-7\" (UID: \"8142ad48c558aaee3b33bf798fd6d7bb\") " pod="kube-system/kube-apiserver-ip-172-31-19-7" Feb 9 19:00:52.603868 kubelet[2906]: I0209 19:00:52.603851 2906 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8142ad48c558aaee3b33bf798fd6d7bb-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-7\" (UID: \"8142ad48c558aaee3b33bf798fd6d7bb\") " pod="kube-system/kube-apiserver-ip-172-31-19-7" Feb 9 19:00:52.604044 kubelet[2906]: I0209 19:00:52.604033 2906 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3eceb6257295f027c6bd389bcf1d232f-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-7\" (UID: \"3eceb6257295f027c6bd389bcf1d232f\") " 
pod="kube-system/kube-controller-manager-ip-172-31-19-7" Feb 9 19:00:52.604201 kubelet[2906]: I0209 19:00:52.604181 2906 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3eceb6257295f027c6bd389bcf1d232f-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-7\" (UID: \"3eceb6257295f027c6bd389bcf1d232f\") " pod="kube-system/kube-controller-manager-ip-172-31-19-7" Feb 9 19:00:53.041658 kubelet[2906]: I0209 19:00:53.041614 2906 apiserver.go:52] "Watching apiserver" Feb 9 19:00:53.074397 kubelet[2906]: I0209 19:00:53.074357 2906 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 19:00:53.112664 kubelet[2906]: I0209 19:00:53.112621 2906 reconciler.go:41] "Reconciler: start to sync state" Feb 9 19:00:53.473908 kubelet[2906]: E0209 19:00:53.473802 2906 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-19-7\" already exists" pod="kube-system/kube-apiserver-ip-172-31-19-7" Feb 9 19:00:53.716601 kubelet[2906]: E0209 19:00:53.716566 2906 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-19-7\" already exists" pod="kube-system/kube-scheduler-ip-172-31-19-7" Feb 9 19:00:54.034931 kubelet[2906]: E0209 19:00:54.034873 2906 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-19-7\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-19-7" Feb 9 19:00:54.426745 kubelet[2906]: I0209 19:00:54.426644 2906 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-19-7" podStartSLOduration=2.426008677 pod.CreationTimestamp="2024-02-09 19:00:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:00:53.860540737 +0000 UTC m=+1.999488592" 
watchObservedRunningTime="2024-02-09 19:00:54.426008677 +0000 UTC m=+2.564956537" Feb 9 19:00:54.838590 kubelet[2906]: I0209 19:00:54.838498 2906 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-19-7" podStartSLOduration=2.838386094 pod.CreationTimestamp="2024-02-09 19:00:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:00:54.427973201 +0000 UTC m=+2.566921051" watchObservedRunningTime="2024-02-09 19:00:54.838386094 +0000 UTC m=+2.977333953" Feb 9 19:00:56.803052 kubelet[2906]: I0209 19:00:56.803002 2906 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-19-7" podStartSLOduration=4.802948785 pod.CreationTimestamp="2024-02-09 19:00:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:00:54.83891609 +0000 UTC m=+2.977863949" watchObservedRunningTime="2024-02-09 19:00:56.802948785 +0000 UTC m=+4.941896635" Feb 9 19:00:57.337572 amazon-ssm-agent[1774]: 2024-02-09 19:00:57 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Feb 9 19:00:58.369531 sudo[2009]: pam_unix(sudo:session): session closed for user root Feb 9 19:00:58.368000 audit[2009]: USER_END pid=2009 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Feb 9 19:00:58.376434 kernel: kauditd_printk_skb: 4 callbacks suppressed Feb 9 19:00:58.376590 kernel: audit: type=1106 audit(1707505258.368:228): pid=2009 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:00:58.369000 audit[2009]: CRED_DISP pid=2009 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:00:58.381537 kernel: audit: type=1104 audit(1707505258.369:229): pid=2009 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:00:58.403115 sshd[2005]: pam_unix(sshd:session): session closed for user core Feb 9 19:00:58.406000 audit[2005]: USER_END pid=2005 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:00:58.415912 kernel: audit: type=1106 audit(1707505258.406:230): pid=2005 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:00:58.407000 audit[2005]: CRED_DISP pid=2005 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:00:58.421553 
kernel: audit: type=1104 audit(1707505258.407:231): pid=2005 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:00:58.424145 systemd[1]: sshd@6-172.31.19.7:22-139.178.68.195:49334.service: Deactivated successfully. Feb 9 19:00:58.427962 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 19:00:58.429699 systemd-logind[1698]: Session 7 logged out. Waiting for processes to exit. Feb 9 19:00:58.423000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.19.7:22-139.178.68.195:49334 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:58.432472 systemd-logind[1698]: Removed session 7. Feb 9 19:00:58.437762 kernel: audit: type=1131 audit(1707505258.423:232): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.19.7:22-139.178.68.195:49334 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:04.459946 kubelet[2906]: I0209 19:01:04.459918 2906 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 9 19:01:04.461061 env[1709]: time="2024-02-09T19:01:04.460991890Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 9 19:01:04.461522 kubelet[2906]: I0209 19:01:04.461294 2906 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 9 19:01:05.061300 kubelet[2906]: I0209 19:01:05.061227 2906 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:01:05.116322 kubelet[2906]: I0209 19:01:05.116297 2906 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd6j7\" (UniqueName: \"kubernetes.io/projected/78f4d5bb-eced-44c8-9f32-45bac2ed59db-kube-api-access-rd6j7\") pod \"kube-proxy-bnzpn\" (UID: \"78f4d5bb-eced-44c8-9f32-45bac2ed59db\") " pod="kube-system/kube-proxy-bnzpn" Feb 9 19:01:05.116717 kubelet[2906]: I0209 19:01:05.116618 2906 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/78f4d5bb-eced-44c8-9f32-45bac2ed59db-xtables-lock\") pod \"kube-proxy-bnzpn\" (UID: \"78f4d5bb-eced-44c8-9f32-45bac2ed59db\") " pod="kube-system/kube-proxy-bnzpn" Feb 9 19:01:05.116875 kubelet[2906]: I0209 19:01:05.116863 2906 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/78f4d5bb-eced-44c8-9f32-45bac2ed59db-kube-proxy\") pod \"kube-proxy-bnzpn\" (UID: \"78f4d5bb-eced-44c8-9f32-45bac2ed59db\") " pod="kube-system/kube-proxy-bnzpn" Feb 9 19:01:05.117104 kubelet[2906]: I0209 19:01:05.117091 2906 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/78f4d5bb-eced-44c8-9f32-45bac2ed59db-lib-modules\") pod \"kube-proxy-bnzpn\" (UID: \"78f4d5bb-eced-44c8-9f32-45bac2ed59db\") " pod="kube-system/kube-proxy-bnzpn" Feb 9 19:01:05.210757 kubelet[2906]: I0209 19:01:05.210702 2906 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:01:05.320104 kubelet[2906]: I0209 19:01:05.319951 2906 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85rcw\" (UniqueName: \"kubernetes.io/projected/5a72ba71-fd85-4ef2-8585-852981cfde95-kube-api-access-85rcw\") pod \"tigera-operator-cfc98749c-vsrgm\" (UID: \"5a72ba71-fd85-4ef2-8585-852981cfde95\") " pod="tigera-operator/tigera-operator-cfc98749c-vsrgm" Feb 9 19:01:05.320104 kubelet[2906]: I0209 19:01:05.320015 2906 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5a72ba71-fd85-4ef2-8585-852981cfde95-var-lib-calico\") pod \"tigera-operator-cfc98749c-vsrgm\" (UID: \"5a72ba71-fd85-4ef2-8585-852981cfde95\") " pod="tigera-operator/tigera-operator-cfc98749c-vsrgm" Feb 9 19:01:05.374298 env[1709]: time="2024-02-09T19:01:05.374241867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bnzpn,Uid:78f4d5bb-eced-44c8-9f32-45bac2ed59db,Namespace:kube-system,Attempt:0,}" Feb 9 19:01:05.411432 env[1709]: time="2024-02-09T19:01:05.411340027Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:01:05.411432 env[1709]: time="2024-02-09T19:01:05.411388028Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:01:05.411919 env[1709]: time="2024-02-09T19:01:05.411404153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:01:05.411919 env[1709]: time="2024-02-09T19:01:05.411652185Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3781ffa2ceb4374f8d1abf69abb6eb378257e97aa4a5216ac0a6e2c95ad50038 pid=3016 runtime=io.containerd.runc.v2 Feb 9 19:01:05.497945 env[1709]: time="2024-02-09T19:01:05.497893864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bnzpn,Uid:78f4d5bb-eced-44c8-9f32-45bac2ed59db,Namespace:kube-system,Attempt:0,} returns sandbox id \"3781ffa2ceb4374f8d1abf69abb6eb378257e97aa4a5216ac0a6e2c95ad50038\"" Feb 9 19:01:05.502461 env[1709]: time="2024-02-09T19:01:05.502417455Z" level=info msg="CreateContainer within sandbox \"3781ffa2ceb4374f8d1abf69abb6eb378257e97aa4a5216ac0a6e2c95ad50038\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 19:01:05.515879 env[1709]: time="2024-02-09T19:01:05.515792616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-cfc98749c-vsrgm,Uid:5a72ba71-fd85-4ef2-8585-852981cfde95,Namespace:tigera-operator,Attempt:0,}" Feb 9 19:01:05.533500 env[1709]: time="2024-02-09T19:01:05.533456603Z" level=info msg="CreateContainer within sandbox \"3781ffa2ceb4374f8d1abf69abb6eb378257e97aa4a5216ac0a6e2c95ad50038\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b85a19f73e3f475fa576f03c4d884b1f7d3b53c5de77c96ebcfa91cd188235de\"" Feb 9 19:01:05.536794 env[1709]: time="2024-02-09T19:01:05.534673407Z" level=info msg="StartContainer for \"b85a19f73e3f475fa576f03c4d884b1f7d3b53c5de77c96ebcfa91cd188235de\"" Feb 9 19:01:05.561825 env[1709]: time="2024-02-09T19:01:05.561724710Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:01:05.562109 env[1709]: time="2024-02-09T19:01:05.561969158Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:01:05.562109 env[1709]: time="2024-02-09T19:01:05.562021388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:01:05.562286 env[1709]: time="2024-02-09T19:01:05.562241039Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/56eed4929461d7f61a278e45ad04401c6b337ada2d96291d8ffcf73614a82888 pid=3066 runtime=io.containerd.runc.v2 Feb 9 19:01:05.687707 env[1709]: time="2024-02-09T19:01:05.687030038Z" level=info msg="StartContainer for \"b85a19f73e3f475fa576f03c4d884b1f7d3b53c5de77c96ebcfa91cd188235de\" returns successfully" Feb 9 19:01:05.704693 env[1709]: time="2024-02-09T19:01:05.704656420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-cfc98749c-vsrgm,Uid:5a72ba71-fd85-4ef2-8585-852981cfde95,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"56eed4929461d7f61a278e45ad04401c6b337ada2d96291d8ffcf73614a82888\"" Feb 9 19:01:05.708284 env[1709]: time="2024-02-09T19:01:05.708223041Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.32.3\"" Feb 9 19:01:06.091000 audit[3151]: NETFILTER_CFG table=mangle:59 family=2 entries=1 op=nft_register_chain pid=3151 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:01:06.118076 kernel: audit: type=1325 audit(1707505266.091:233): table=mangle:59 family=2 entries=1 op=nft_register_chain pid=3151 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:01:06.118279 kernel: audit: type=1300 audit(1707505266.091:233): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdd8d2b060 a2=0 a3=7ffdd8d2b04c items=0 ppid=3099 pid=3151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:06.091000 audit[3151]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdd8d2b060 a2=0 a3=7ffdd8d2b04c items=0 ppid=3099 pid=3151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:06.091000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 19:01:06.137675 kernel: audit: type=1327 audit(1707505266.091:233): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 19:01:06.093000 audit[3153]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_chain pid=3153 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:01:06.154171 kernel: audit: type=1325 audit(1707505266.093:234): table=nat:60 family=2 entries=1 op=nft_register_chain pid=3153 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:01:06.154295 kernel: audit: type=1300 audit(1707505266.093:234): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff5c12e490 a2=0 a3=7fff5c12e47c items=0 ppid=3099 pid=3153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:06.093000 audit[3153]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff5c12e490 a2=0 a3=7fff5c12e47c items=0 ppid=3099 pid=3153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:06.093000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 9 19:01:06.159425 kernel: audit: type=1327 audit(1707505266.093:234): 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 9 19:01:06.160019 kernel: audit: type=1325 audit(1707505266.095:235): table=filter:61 family=2 entries=1 op=nft_register_chain pid=3154 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:01:06.095000 audit[3154]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_chain pid=3154 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:01:06.095000 audit[3154]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffec9b53ce0 a2=0 a3=7ffec9b53ccc items=0 ppid=3099 pid=3154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:06.171767 kernel: audit: type=1300 audit(1707505266.095:235): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffec9b53ce0 a2=0 a3=7ffec9b53ccc items=0 ppid=3099 pid=3154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:06.171908 kernel: audit: type=1327 audit(1707505266.095:235): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 9 19:01:06.095000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 9 19:01:06.128000 audit[3152]: NETFILTER_CFG table=mangle:62 family=10 entries=1 op=nft_register_chain pid=3152 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:01:06.180665 kernel: audit: type=1325 audit(1707505266.128:236): table=mangle:62 family=10 entries=1 op=nft_register_chain pid=3152 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:01:06.128000 audit[3152]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=104 a0=3 a1=7fff8bc5fd10 a2=0 a3=7fff8bc5fcfc items=0 ppid=3099 pid=3152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:06.128000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 19:01:06.142000 audit[3155]: NETFILTER_CFG table=nat:63 family=10 entries=1 op=nft_register_chain pid=3155 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:01:06.142000 audit[3155]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff6dca0f60 a2=0 a3=7fff6dca0f4c items=0 ppid=3099 pid=3155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:06.142000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 9 19:01:06.148000 audit[3156]: NETFILTER_CFG table=filter:64 family=10 entries=1 op=nft_register_chain pid=3156 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:01:06.148000 audit[3156]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffca98ddaf0 a2=0 a3=7ffca98ddadc items=0 ppid=3099 pid=3156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:06.148000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 9 19:01:06.202000 audit[3157]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3157 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:01:06.202000 audit[3157]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fff04251bf0 a2=0 a3=7fff04251bdc items=0 ppid=3099 pid=3157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:06.202000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 9 19:01:06.209000 audit[3159]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3159 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:01:06.209000 audit[3159]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff3f9b7e40 a2=0 a3=7fff3f9b7e2c items=0 ppid=3099 pid=3159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:06.209000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Feb 9 19:01:06.219000 audit[3162]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3162 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:01:06.219000 audit[3162]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc341a84e0 a2=0 a3=7ffc341a84cc items=0 ppid=3099 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:06.219000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Feb 9 19:01:06.222000 audit[3163]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3163 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:01:06.222000 audit[3163]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc139bc0e0 a2=0 a3=7ffc139bc0cc items=0 ppid=3099 pid=3163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:06.222000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 9 19:01:06.226000 audit[3165]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3165 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:01:06.226000 audit[3165]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd7833f110 a2=0 a3=7ffd7833f0fc items=0 ppid=3099 pid=3165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:06.226000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 9 19:01:06.227000 audit[3166]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3166 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:01:06.227000 audit[3166]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 
a0=3 a1=7ffc1764be20 a2=0 a3=7ffc1764be0c items=0 ppid=3099 pid=3166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:06.227000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 9 19:01:06.230000 audit[3168]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3168 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:01:06.230000 audit[3168]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffcd6123490 a2=0 a3=7ffcd612347c items=0 ppid=3099 pid=3168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:06.230000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 9 19:01:06.235000 audit[3171]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3171 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:01:06.235000 audit[3171]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff72e9d520 a2=0 a3=7fff72e9d50c items=0 ppid=3099 pid=3171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:06.235000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Feb 9 19:01:06.236000 audit[3172]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_chain pid=3172 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:01:06.236000 audit[3172]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffe7927560 a2=0 a3=7fffe792754c items=0 ppid=3099 pid=3172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:06.236000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 9 19:01:06.240000 audit[3174]: NETFILTER_CFG table=filter:74 family=2 entries=1 op=nft_register_rule pid=3174 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:01:06.240000 audit[3174]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd8707be50 a2=0 a3=7ffd8707be3c items=0 ppid=3099 pid=3174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:06.240000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 9 19:01:06.241000 audit[3175]: NETFILTER_CFG table=filter:75 family=2 entries=1 op=nft_register_chain pid=3175 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:01:06.241000 audit[3175]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe0a62f0d0 a2=0 
a3=7ffe0a62f0bc items=0 ppid=3099 pid=3175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:06.241000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 9 19:01:06.245000 audit[3177]: NETFILTER_CFG table=filter:76 family=2 entries=1 op=nft_register_rule pid=3177 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:01:06.245000 audit[3177]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe2b000a90 a2=0 a3=7ffe2b000a7c items=0 ppid=3099 pid=3177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:06.245000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 9 19:01:06.263000 audit[3180]: NETFILTER_CFG table=filter:77 family=2 entries=1 op=nft_register_rule pid=3180 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:01:06.263000 audit[3180]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffce6e7ada0 a2=0 a3=7ffce6e7ad8c items=0 ppid=3099 pid=3180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:06.263000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 9 19:01:06.269000 audit[3183]: NETFILTER_CFG table=filter:78 family=2 entries=1 op=nft_register_rule pid=3183 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:01:06.269000 audit[3183]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff77c6f510 a2=0 a3=7fff77c6f4fc items=0 ppid=3099 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:06.269000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 9 19:01:06.270000 audit[3184]: NETFILTER_CFG table=nat:79 family=2 entries=1 op=nft_register_chain pid=3184 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:01:06.270000 audit[3184]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc4deb99c0 a2=0 a3=7ffc4deb99ac items=0 ppid=3099 pid=3184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:06.270000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 9 19:01:06.274000 audit[3186]: NETFILTER_CFG table=nat:80 family=2 entries=1 op=nft_register_rule pid=3186 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:01:06.274000 audit[3186]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 
a0=3 a1=7ffd93b07320 a2=0 a3=7ffd93b0730c items=0 ppid=3099 pid=3186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:06.274000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 19:01:06.279000 audit[3189]: NETFILTER_CFG table=nat:81 family=2 entries=1 op=nft_register_rule pid=3189 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:01:06.279000 audit[3189]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe2e6489a0 a2=0 a3=7ffe2e64898c items=0 ppid=3099 pid=3189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:06.279000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 19:01:06.302000 audit[3193]: NETFILTER_CFG table=filter:82 family=2 entries=6 op=nft_register_rule pid=3193 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:01:06.302000 audit[3193]: SYSCALL arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7ffe3f612230 a2=0 a3=7ffe3f61221c items=0 ppid=3099 pid=3193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:06.302000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 
19:01:06.312000 audit[3193]: NETFILTER_CFG table=nat:83 family=2 entries=17 op=nft_register_chain pid=3193 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:01:06.312000 audit[3193]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffe3f612230 a2=0 a3=7ffe3f61221c items=0 ppid=3099 pid=3193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:06.312000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:01:06.375000 audit[3223]: NETFILTER_CFG table=filter:84 family=2 entries=12 op=nft_register_rule pid=3223 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:01:06.375000 audit[3223]: SYSCALL arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7ffed0d4a1e0 a2=0 a3=7ffed0d4a1cc items=0 ppid=3099 pid=3223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:06.375000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:01:06.379000 audit[3223]: NETFILTER_CFG table=nat:85 family=2 entries=20 op=nft_register_rule pid=3223 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:01:06.379000 audit[3223]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffed0d4a1e0 a2=0 a3=7ffed0d4a1cc items=0 ppid=3099 pid=3223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:06.379000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:01:06.381000 audit[3224]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3224 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:01:06.381000 audit[3224]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fffdc544950 a2=0 a3=7fffdc54493c items=0 ppid=3099 pid=3224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:06.381000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 9 19:01:06.392000 audit[3226]: NETFILTER_CFG table=filter:87 family=10 entries=2 op=nft_register_chain pid=3226 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:01:06.392000 audit[3226]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffcd89efd70 a2=0 a3=7ffcd89efd5c items=0 ppid=3099 pid=3226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:06.392000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Feb 9 19:01:06.399000 audit[3230]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3230 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:01:06.399000 audit[3230]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffff7504270 a2=0 a3=7ffff750425c items=0 ppid=3099 pid=3230 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:06.399000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Feb 9 19:01:06.402000 audit[3231]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3231 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:01:06.402000 audit[3231]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff949ff180 a2=0 a3=7fff949ff16c items=0 ppid=3099 pid=3231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:06.402000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 9 19:01:06.407000 audit[3233]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3233 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:01:06.407000 audit[3233]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffecc568fe0 a2=0 a3=7ffecc568fcc items=0 ppid=3099 pid=3233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:06.407000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 9 19:01:06.408000 audit[3234]: NETFILTER_CFG 
table=filter:91 family=10 entries=1 op=nft_register_chain pid=3234 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:01:06.408000 audit[3234]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdea282df0 a2=0 a3=7ffdea282ddc items=0 ppid=3099 pid=3234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:06.408000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 9 19:01:06.416000 audit[3236]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3236 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:01:06.416000 audit[3236]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe16a24f50 a2=0 a3=7ffe16a24f3c items=0 ppid=3099 pid=3236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:06.416000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Feb 9 19:01:06.426000 audit[3239]: NETFILTER_CFG table=filter:93 family=10 entries=2 op=nft_register_chain pid=3239 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:01:06.426000 audit[3239]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffffddad4d0 a2=0 a3=7ffffddad4bc items=0 ppid=3099 pid=3239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:06.426000 audit: 
PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 9 19:01:06.438000 audit[3240]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_chain pid=3240 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:01:06.438000 audit[3240]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe91035330 a2=0 a3=7ffe9103531c items=0 ppid=3099 pid=3240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:06.438000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 9 19:01:06.450000 audit[3242]: NETFILTER_CFG table=filter:95 family=10 entries=1 op=nft_register_rule pid=3242 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:01:06.450000 audit[3242]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffff5de84d0 a2=0 a3=7ffff5de84bc items=0 ppid=3099 pid=3242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:06.450000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 9 19:01:06.454000 audit[3243]: NETFILTER_CFG table=filter:96 family=10 entries=1 op=nft_register_chain pid=3243 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:01:06.454000 audit[3243]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 
a1=7fffc1c6afe0 a2=0 a3=7fffc1c6afcc items=0 ppid=3099 pid=3243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:06.454000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 9 19:01:06.462000 audit[3245]: NETFILTER_CFG table=filter:97 family=10 entries=1 op=nft_register_rule pid=3245 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:01:06.462000 audit[3245]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd8c2096f0 a2=0 a3=7ffd8c2096dc items=0 ppid=3099 pid=3245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:06.462000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 9 19:01:06.469000 audit[3248]: NETFILTER_CFG table=filter:98 family=10 entries=1 op=nft_register_rule pid=3248 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:01:06.469000 audit[3248]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffc50aabd0 a2=0 a3=7fffc50aabbc items=0 ppid=3099 pid=3248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:06.469000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 9 19:01:06.475000 audit[3251]: NETFILTER_CFG table=filter:99 family=10 entries=1 op=nft_register_rule pid=3251 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:01:06.475000 audit[3251]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd76383ca0 a2=0 a3=7ffd76383c8c items=0 ppid=3099 pid=3251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:06.475000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Feb 9 19:01:06.477000 audit[3252]: NETFILTER_CFG table=nat:100 family=10 entries=1 op=nft_register_chain pid=3252 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:01:06.477000 audit[3252]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc6c24c4d0 a2=0 a3=7ffc6c24c4bc items=0 ppid=3099 pid=3252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:06.477000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 9 19:01:06.481000 audit[3254]: NETFILTER_CFG table=nat:101 family=10 entries=2 op=nft_register_chain pid=3254 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:01:06.481000 audit[3254]: SYSCALL arch=c000003e syscall=46 
success=yes exit=600 a0=3 a1=7ffc862f7b70 a2=0 a3=7ffc862f7b5c items=0 ppid=3099 pid=3254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:06.481000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 19:01:06.492000 audit[3257]: NETFILTER_CFG table=nat:102 family=10 entries=2 op=nft_register_chain pid=3257 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:01:06.492000 audit[3257]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffd3528f120 a2=0 a3=7ffd3528f10c items=0 ppid=3099 pid=3257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:06.492000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 19:01:06.505956 kubelet[2906]: I0209 19:01:06.505924 2906 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-bnzpn" podStartSLOduration=1.505846043 pod.CreationTimestamp="2024-02-09 19:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:01:06.505348626 +0000 UTC m=+14.644296488" watchObservedRunningTime="2024-02-09 19:01:06.505846043 +0000 UTC m=+14.644793915" Feb 9 19:01:06.517000 audit[3261]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3261 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 9 19:01:06.517000 audit[3261]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffec1ba9340 a2=0 a3=7ffec1ba932c items=0 ppid=3099 pid=3261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:06.517000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:01:06.529000 audit[3261]: NETFILTER_CFG table=nat:104 family=10 entries=10 op=nft_register_chain pid=3261 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 9 19:01:06.529000 audit[3261]: SYSCALL arch=c000003e syscall=46 success=yes exit=1968 a0=3 a1=7ffec1ba9340 a2=0 a3=7ffec1ba932c items=0 ppid=3099 pid=3261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:06.529000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:01:06.889261 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4114443675.mount: Deactivated successfully. 
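The `proctitle=` values in the audit records above are the full command line of the process that triggered the syscall, hex-encoded with NUL bytes separating the argv elements. A minimal sketch of a decoder (the helper name `decode_proctitle` is illustrative, not part of any audit tooling) applied to the `iptables-restore` entries above:

```python
def decode_proctitle(hex_str: str) -> list[str]:
    # Linux audit PROCTITLE records hex-encode the raw process argv;
    # the individual arguments are separated by NUL (0x00) bytes.
    raw = bytes.fromhex(hex_str)
    return [part.decode("utf-8", errors="replace") for part in raw.split(b"\x00")]

# proctitle value copied from the iptables-restore audit entries above
print(decode_proctitle(
    "69707461626C65732D726573746F7265002D770035002D5700"
    "313030303030002D2D6E6F666C757368002D2D636F756E74657273"
))
# → ['iptables-restore', '-w', '5', '-W', '100000', '--noflush', '--counters']
```

This confirms the repeated restores are kube-proxy invoking `iptables-restore -w 5 -W 100000 --noflush --counters`, i.e. atomic rule loads that wait on the xtables lock and preserve existing chains and packet counters.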
Feb 9 19:01:08.351312 env[1709]: time="2024-02-09T19:01:08.351258932Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:01:08.357461 env[1709]: time="2024-02-09T19:01:08.357406804Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7bc79e0d3be4fa8c35133127424f9b1ec775af43145b7dd58637905c76084827,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:01:08.360324 env[1709]: time="2024-02-09T19:01:08.360235724Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:01:08.362880 env[1709]: time="2024-02-09T19:01:08.362839326Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:715ac9a30f8a9579e44258af20de354715429e11836b493918e9e1a696e9b028,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:01:08.363533 env[1709]: time="2024-02-09T19:01:08.363475637Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.32.3\" returns image reference \"sha256:7bc79e0d3be4fa8c35133127424f9b1ec775af43145b7dd58637905c76084827\"" Feb 9 19:01:08.368084 env[1709]: time="2024-02-09T19:01:08.368044182Z" level=info msg="CreateContainer within sandbox \"56eed4929461d7f61a278e45ad04401c6b337ada2d96291d8ffcf73614a82888\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 9 19:01:08.398463 env[1709]: time="2024-02-09T19:01:08.398416043Z" level=info msg="CreateContainer within sandbox \"56eed4929461d7f61a278e45ad04401c6b337ada2d96291d8ffcf73614a82888\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"0ff46160ec7fed4f89ea4f88a6253050b3b53a9753cec5b566b4c2c6c2b80c86\"" Feb 9 19:01:08.403358 env[1709]: time="2024-02-09T19:01:08.399860403Z" level=info msg="StartContainer for 
\"0ff46160ec7fed4f89ea4f88a6253050b3b53a9753cec5b566b4c2c6c2b80c86\"" Feb 9 19:01:08.527555 env[1709]: time="2024-02-09T19:01:08.526936703Z" level=info msg="StartContainer for \"0ff46160ec7fed4f89ea4f88a6253050b3b53a9753cec5b566b4c2c6c2b80c86\" returns successfully" Feb 9 19:01:09.386340 systemd[1]: run-containerd-runc-k8s.io-0ff46160ec7fed4f89ea4f88a6253050b3b53a9753cec5b566b4c2c6c2b80c86-runc.XBxGCU.mount: Deactivated successfully. Feb 9 19:01:09.552563 kubelet[2906]: I0209 19:01:09.551756 2906 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-cfc98749c-vsrgm" podStartSLOduration=-9.223372032303099e+09 pod.CreationTimestamp="2024-02-09 19:01:05 +0000 UTC" firstStartedPulling="2024-02-09 19:01:05.705946857 +0000 UTC m=+13.844894697" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:01:09.550840948 +0000 UTC m=+17.689788819" watchObservedRunningTime="2024-02-09 19:01:09.551677441 +0000 UTC m=+17.690625334" Feb 9 19:01:10.975000 audit[3325]: NETFILTER_CFG table=filter:105 family=2 entries=13 op=nft_register_rule pid=3325 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:01:10.975000 audit[3325]: SYSCALL arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7ffc0b0435d0 a2=0 a3=7ffc0b0435bc items=0 ppid=3099 pid=3325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:10.975000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:01:10.977000 audit[3325]: NETFILTER_CFG table=nat:106 family=2 entries=20 op=nft_register_rule pid=3325 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:01:10.977000 audit[3325]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffc0b0435d0 a2=0 
a3=7ffc0b0435bc items=0 ppid=3099 pid=3325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:10.977000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:01:11.056000 audit[3351]: NETFILTER_CFG table=filter:107 family=2 entries=14 op=nft_register_rule pid=3351 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:01:11.056000 audit[3351]: SYSCALL arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7ffff1939800 a2=0 a3=7ffff19397ec items=0 ppid=3099 pid=3351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:11.056000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:01:11.057000 audit[3351]: NETFILTER_CFG table=nat:108 family=2 entries=20 op=nft_register_rule pid=3351 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:01:11.057000 audit[3351]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffff1939800 a2=0 a3=7ffff19397ec items=0 ppid=3099 pid=3351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:11.057000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:01:11.107261 kubelet[2906]: I0209 19:01:11.107232 2906 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:01:11.177339 kubelet[2906]: I0209 19:01:11.177307 2906 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/bc2d0795-a4fa-4f48-a139-6fdbafdd0f50-typha-certs\") pod \"calico-typha-58c6cc75fd-sjc7l\" (UID: \"bc2d0795-a4fa-4f48-a139-6fdbafdd0f50\") " pod="calico-system/calico-typha-58c6cc75fd-sjc7l" Feb 9 19:01:11.177877 kubelet[2906]: I0209 19:01:11.177857 2906 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29hkh\" (UniqueName: \"kubernetes.io/projected/bc2d0795-a4fa-4f48-a139-6fdbafdd0f50-kube-api-access-29hkh\") pod \"calico-typha-58c6cc75fd-sjc7l\" (UID: \"bc2d0795-a4fa-4f48-a139-6fdbafdd0f50\") " pod="calico-system/calico-typha-58c6cc75fd-sjc7l" Feb 9 19:01:11.178080 kubelet[2906]: I0209 19:01:11.178055 2906 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc2d0795-a4fa-4f48-a139-6fdbafdd0f50-tigera-ca-bundle\") pod \"calico-typha-58c6cc75fd-sjc7l\" (UID: \"bc2d0795-a4fa-4f48-a139-6fdbafdd0f50\") " pod="calico-system/calico-typha-58c6cc75fd-sjc7l" Feb 9 19:01:11.264644 kubelet[2906]: I0209 19:01:11.264506 2906 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:01:11.378987 kubelet[2906]: I0209 19:01:11.378858 2906 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/61fe811e-87e0-42cd-9431-581a3031da51-cni-net-dir\") pod \"calico-node-dcqp7\" (UID: \"61fe811e-87e0-42cd-9431-581a3031da51\") " pod="calico-system/calico-node-dcqp7" Feb 9 19:01:11.379168 kubelet[2906]: I0209 19:01:11.379006 2906 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/61fe811e-87e0-42cd-9431-581a3031da51-lib-modules\") pod \"calico-node-dcqp7\" (UID: \"61fe811e-87e0-42cd-9431-581a3031da51\") " 
pod="calico-system/calico-node-dcqp7" Feb 9 19:01:11.379168 kubelet[2906]: I0209 19:01:11.379044 2906 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/61fe811e-87e0-42cd-9431-581a3031da51-xtables-lock\") pod \"calico-node-dcqp7\" (UID: \"61fe811e-87e0-42cd-9431-581a3031da51\") " pod="calico-system/calico-node-dcqp7" Feb 9 19:01:11.379168 kubelet[2906]: I0209 19:01:11.379076 2906 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/61fe811e-87e0-42cd-9431-581a3031da51-cni-bin-dir\") pod \"calico-node-dcqp7\" (UID: \"61fe811e-87e0-42cd-9431-581a3031da51\") " pod="calico-system/calico-node-dcqp7" Feb 9 19:01:11.379168 kubelet[2906]: I0209 19:01:11.379105 2906 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/61fe811e-87e0-42cd-9431-581a3031da51-var-run-calico\") pod \"calico-node-dcqp7\" (UID: \"61fe811e-87e0-42cd-9431-581a3031da51\") " pod="calico-system/calico-node-dcqp7" Feb 9 19:01:11.379168 kubelet[2906]: I0209 19:01:11.379134 2906 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/61fe811e-87e0-42cd-9431-581a3031da51-policysync\") pod \"calico-node-dcqp7\" (UID: \"61fe811e-87e0-42cd-9431-581a3031da51\") " pod="calico-system/calico-node-dcqp7" Feb 9 19:01:11.379449 kubelet[2906]: I0209 19:01:11.379162 2906 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61fe811e-87e0-42cd-9431-581a3031da51-tigera-ca-bundle\") pod \"calico-node-dcqp7\" (UID: \"61fe811e-87e0-42cd-9431-581a3031da51\") " pod="calico-system/calico-node-dcqp7" Feb 9 19:01:11.379449 kubelet[2906]: I0209 
19:01:11.379195 2906 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/61fe811e-87e0-42cd-9431-581a3031da51-node-certs\") pod \"calico-node-dcqp7\" (UID: \"61fe811e-87e0-42cd-9431-581a3031da51\") " pod="calico-system/calico-node-dcqp7" Feb 9 19:01:11.379449 kubelet[2906]: I0209 19:01:11.379230 2906 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/61fe811e-87e0-42cd-9431-581a3031da51-var-lib-calico\") pod \"calico-node-dcqp7\" (UID: \"61fe811e-87e0-42cd-9431-581a3031da51\") " pod="calico-system/calico-node-dcqp7" Feb 9 19:01:11.379449 kubelet[2906]: I0209 19:01:11.379263 2906 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/61fe811e-87e0-42cd-9431-581a3031da51-flexvol-driver-host\") pod \"calico-node-dcqp7\" (UID: \"61fe811e-87e0-42cd-9431-581a3031da51\") " pod="calico-system/calico-node-dcqp7" Feb 9 19:01:11.379449 kubelet[2906]: I0209 19:01:11.379294 2906 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/61fe811e-87e0-42cd-9431-581a3031da51-cni-log-dir\") pod \"calico-node-dcqp7\" (UID: \"61fe811e-87e0-42cd-9431-581a3031da51\") " pod="calico-system/calico-node-dcqp7" Feb 9 19:01:11.379684 kubelet[2906]: I0209 19:01:11.379327 2906 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmd9c\" (UniqueName: \"kubernetes.io/projected/61fe811e-87e0-42cd-9431-581a3031da51-kube-api-access-dmd9c\") pod \"calico-node-dcqp7\" (UID: \"61fe811e-87e0-42cd-9431-581a3031da51\") " pod="calico-system/calico-node-dcqp7" Feb 9 19:01:11.416708 kubelet[2906]: I0209 19:01:11.416672 2906 topology_manager.go:210] "Topology Admit 
Handler"
Feb 9 19:01:11.417184 kubelet[2906]: E0209 19:01:11.417164 2906 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sfvlp" podUID=196d1d2d-b701-4a86-946e-fecee4636cf4
Feb 9 19:01:11.419357 env[1709]: time="2024-02-09T19:01:11.419252625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-58c6cc75fd-sjc7l,Uid:bc2d0795-a4fa-4f48-a139-6fdbafdd0f50,Namespace:calico-system,Attempt:0,}"
Feb 9 19:01:11.471904 env[1709]: time="2024-02-09T19:01:11.471811677Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:01:11.472081 env[1709]: time="2024-02-09T19:01:11.471928220Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:01:11.472081 env[1709]: time="2024-02-09T19:01:11.471959418Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:01:11.472404 env[1709]: time="2024-02-09T19:01:11.472288269Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c63943cf1a0f8e23b9e8c4144b83b26c0e9862cf8ad8a5c1dfce962a51d2b50a pid=3361 runtime=io.containerd.runc.v2
Feb 9 19:01:11.483556 kubelet[2906]: I0209 19:01:11.482165 2906 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/196d1d2d-b701-4a86-946e-fecee4636cf4-registration-dir\") pod \"csi-node-driver-sfvlp\" (UID: \"196d1d2d-b701-4a86-946e-fecee4636cf4\") " pod="calico-system/csi-node-driver-sfvlp"
Feb 9 19:01:11.483556 kubelet[2906]: I0209 19:01:11.482259 2906 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/196d1d2d-b701-4a86-946e-fecee4636cf4-varrun\") pod \"csi-node-driver-sfvlp\" (UID: \"196d1d2d-b701-4a86-946e-fecee4636cf4\") " pod="calico-system/csi-node-driver-sfvlp"
Feb 9 19:01:11.483556 kubelet[2906]: I0209 19:01:11.482388 2906 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/196d1d2d-b701-4a86-946e-fecee4636cf4-kubelet-dir\") pod \"csi-node-driver-sfvlp\" (UID: \"196d1d2d-b701-4a86-946e-fecee4636cf4\") " pod="calico-system/csi-node-driver-sfvlp"
Feb 9 19:01:11.483556 kubelet[2906]: I0209 19:01:11.482578 2906 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/196d1d2d-b701-4a86-946e-fecee4636cf4-socket-dir\") pod \"csi-node-driver-sfvlp\" (UID: \"196d1d2d-b701-4a86-946e-fecee4636cf4\") " pod="calico-system/csi-node-driver-sfvlp"
Feb 9 19:01:11.483556 kubelet[2906]: I0209 19:01:11.482635 2906 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2fdl\" (UniqueName: \"kubernetes.io/projected/196d1d2d-b701-4a86-946e-fecee4636cf4-kube-api-access-z2fdl\") pod \"csi-node-driver-sfvlp\" (UID: \"196d1d2d-b701-4a86-946e-fecee4636cf4\") " pod="calico-system/csi-node-driver-sfvlp"
Feb 9 19:01:11.509379 kubelet[2906]: E0209 19:01:11.509290 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 19:01:11.509379 kubelet[2906]: W0209 19:01:11.509332 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 19:01:11.509797 kubelet[2906]: E0209 19:01:11.509676 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the preceding three FlexVolume messages (driver-call.go:262, driver-call.go:149, plugins.go:736) repeat verbatim 32 more times between 19:01:11.511974 and 19:01:11.596161]
Feb 9 19:01:11.659304 env[1709]: time="2024-02-09T19:01:11.659245440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-58c6cc75fd-sjc7l,Uid:bc2d0795-a4fa-4f48-a139-6fdbafdd0f50,Namespace:calico-system,Attempt:0,} returns sandbox id \"c63943cf1a0f8e23b9e8c4144b83b26c0e9862cf8ad8a5c1dfce962a51d2b50a\""
Feb 9 19:01:11.661307 env[1709]: time="2024-02-09T19:01:11.661270517Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.27.0\""
[the same FlexVolume message triplet repeats 4 more times between 19:01:11.690091 and 19:01:11.791537]
Feb 9 19:01:11.878119 env[1709]: time="2024-02-09T19:01:11.877955764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dcqp7,Uid:61fe811e-87e0-42cd-9431-581a3031da51,Namespace:calico-system,Attempt:0,}"
[the same FlexVolume message triplet repeats once more at 19:01:11.893126]
Feb 9 19:01:11.920403 env[1709]: time="2024-02-09T19:01:11.919770185Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:01:11.920403 env[1709]: time="2024-02-09T19:01:11.919911111Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:01:11.920403 env[1709]: time="2024-02-09T19:01:11.919998805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:01:11.920403 env[1709]: time="2024-02-09T19:01:11.920259898Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e7b7644e5cfcf761970b4bdb22f8934e5d1e6d1839a06564b5ce6974fc00eecb pid=3443 runtime=io.containerd.runc.v2
[the same FlexVolume message triplet repeats once more at 19:01:11.939434]
Feb 9 19:01:12.030912 env[1709]: time="2024-02-09T19:01:12.030859917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dcqp7,Uid:61fe811e-87e0-42cd-9431-581a3031da51,Namespace:calico-system,Attempt:0,} returns sandbox id \"e7b7644e5cfcf761970b4bdb22f8934e5d1e6d1839a06564b5ce6974fc00eecb\""
Feb 9 19:01:12.152899 kernel: kauditd_printk_skb: 140 callbacks suppressed
Feb 9 19:01:12.153046 kernel: audit: type=1325 audit(1707505272.145:283): table=filter:109 family=2 entries=14 op=nft_register_rule pid=3504 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 9 19:01:12.145000 audit[3504]: NETFILTER_CFG table=filter:109 family=2 entries=14 op=nft_register_rule pid=3504 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 9 19:01:12.145000 audit[3504]: SYSCALL arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7ffe6cd051f0 a2=0 a3=7ffe6cd051dc items=0 ppid=3099 pid=3504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:01:12.145000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 9 19:01:12.164464 kernel: audit: type=1300 audit(1707505272.145:283): arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7ffe6cd051f0 a2=0 a3=7ffe6cd051dc items=0 ppid=3099 pid=3504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:01:12.164562 kernel: audit: type=1327 audit(1707505272.145:283): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 9 19:01:12.158000 audit[3504]: NETFILTER_CFG table=nat:110 family=2 entries=20 op=nft_register_rule pid=3504 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 9 19:01:12.158000 audit[3504]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffe6cd051f0 a2=0 a3=7ffe6cd051dc items=0 ppid=3099 pid=3504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:01:12.177912 kernel: audit: type=1325 audit(1707505272.158:284): table=nat:110 family=2 entries=20 op=nft_register_rule pid=3504 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 9 19:01:12.178049 kernel: audit: type=1300 audit(1707505272.158:284): arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffe6cd051f0 a2=0 a3=7ffe6cd051dc items=0 ppid=3099 pid=3504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:01:12.178088 kernel: audit: type=1327 audit(1707505272.158:284): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 9 19:01:12.158000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 9 19:01:12.997945 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2199752520.mount: Deactivated successfully.
Feb 9 19:01:13.415808 kubelet[2906]: E0209 19:01:13.415776 2906 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sfvlp" podUID=196d1d2d-b701-4a86-946e-fecee4636cf4
Feb 9 19:01:15.359601 env[1709]: time="2024-02-09T19:01:15.358661066Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:01:15.363544 env[1709]: time="2024-02-09T19:01:15.362707667Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b33768e0da1f8a5788a6a5d8ac2dcf15292ea9f3717de450f946c0a055b3532c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:01:15.373161 env[1709]: time="2024-02-09T19:01:15.373115220Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:01:15.375158 env[1709]: time="2024-02-09T19:01:15.374842407Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:5f2d3b8c354a4eb6de46e786889913916e620c6c256982fb8d0f1a1d36a282bc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:01:15.377083 env[1709]: time="2024-02-09T19:01:15.377036726Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.27.0\" returns image reference \"sha256:b33768e0da1f8a5788a6a5d8ac2dcf15292ea9f3717de450f946c0a055b3532c\""
Feb 9 19:01:15.392808 env[1709]: time="2024-02-09T19:01:15.392769790Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\""
Feb 9 19:01:15.403176 env[1709]: time="2024-02-09T19:01:15.403012183Z" level=info msg="CreateContainer within sandbox \"c63943cf1a0f8e23b9e8c4144b83b26c0e9862cf8ad8a5c1dfce962a51d2b50a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Feb 9 19:01:15.417946 kubelet[2906]: E0209 19:01:15.417908 2906 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sfvlp" podUID=196d1d2d-b701-4a86-946e-fecee4636cf4
Feb 9 19:01:15.422938 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4196890985.mount: Deactivated successfully.
Feb 9 19:01:15.428857 env[1709]: time="2024-02-09T19:01:15.428802816Z" level=info msg="CreateContainer within sandbox \"c63943cf1a0f8e23b9e8c4144b83b26c0e9862cf8ad8a5c1dfce962a51d2b50a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2a91dc3713bce1c6b9c1efdef3cf08daaa7cf61df7e78d3cadc93ce8e1809194\""
Feb 9 19:01:15.429497 env[1709]: time="2024-02-09T19:01:15.429466203Z" level=info msg="StartContainer for \"2a91dc3713bce1c6b9c1efdef3cf08daaa7cf61df7e78d3cadc93ce8e1809194\""
Feb 9 19:01:15.556748 env[1709]: time="2024-02-09T19:01:15.556690047Z" level=info msg="StartContainer for \"2a91dc3713bce1c6b9c1efdef3cf08daaa7cf61df7e78d3cadc93ce8e1809194\" returns successfully"
Feb 9 19:01:16.589558 kubelet[2906]: I0209 19:01:16.589532 2906 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-58c6cc75fd-sjc7l" podStartSLOduration=-9.22337203126532e+09 pod.CreationTimestamp="2024-02-09 19:01:11 +0000 UTC" firstStartedPulling="2024-02-09 19:01:11.660693248 +0000 UTC m=+19.799641089" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:01:16.589127401 +0000 UTC m=+24.728075279" watchObservedRunningTime="2024-02-09 19:01:16.58945659 +0000 UTC m=+24.728404451"
Feb 9 19:01:16.648592 kubelet[2906]: E0209 19:01:16.648558 2906 driver-call.go:262] Failed
to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:01:16.648592 kubelet[2906]: W0209 19:01:16.648585 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:01:16.648854 kubelet[2906]: E0209 19:01:16.648608 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:01:16.648854 kubelet[2906]: E0209 19:01:16.648842 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:01:16.648854 kubelet[2906]: W0209 19:01:16.648852 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:01:16.649128 kubelet[2906]: E0209 19:01:16.648870 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:01:16.649204 kubelet[2906]: E0209 19:01:16.649170 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:01:16.649204 kubelet[2906]: W0209 19:01:16.649182 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:01:16.649204 kubelet[2906]: E0209 19:01:16.649200 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:01:16.649541 kubelet[2906]: E0209 19:01:16.649520 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:01:16.649541 kubelet[2906]: W0209 19:01:16.649537 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:01:16.649987 kubelet[2906]: E0209 19:01:16.649556 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:01:16.650291 kubelet[2906]: E0209 19:01:16.650274 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:01:16.650291 kubelet[2906]: W0209 19:01:16.650288 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:01:16.650418 kubelet[2906]: E0209 19:01:16.650308 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:01:16.650715 kubelet[2906]: E0209 19:01:16.650698 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:01:16.650715 kubelet[2906]: W0209 19:01:16.650711 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:01:16.650851 kubelet[2906]: E0209 19:01:16.650727 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:01:16.651217 kubelet[2906]: E0209 19:01:16.651161 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:01:16.651217 kubelet[2906]: W0209 19:01:16.651175 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:01:16.651332 kubelet[2906]: E0209 19:01:16.651233 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:01:16.651625 kubelet[2906]: E0209 19:01:16.651610 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:01:16.651702 kubelet[2906]: W0209 19:01:16.651626 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:01:16.651702 kubelet[2906]: E0209 19:01:16.651641 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:01:16.652078 kubelet[2906]: E0209 19:01:16.652062 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:01:16.652078 kubelet[2906]: W0209 19:01:16.652075 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:01:16.652346 kubelet[2906]: E0209 19:01:16.652091 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:01:16.652564 kubelet[2906]: E0209 19:01:16.652547 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:01:16.652564 kubelet[2906]: W0209 19:01:16.652560 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:01:16.652691 kubelet[2906]: E0209 19:01:16.652577 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:01:16.652985 kubelet[2906]: E0209 19:01:16.652966 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:01:16.652985 kubelet[2906]: W0209 19:01:16.652981 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:01:16.653400 kubelet[2906]: E0209 19:01:16.652997 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:01:16.653654 kubelet[2906]: E0209 19:01:16.653637 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:01:16.653654 kubelet[2906]: W0209 19:01:16.653651 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:01:16.653770 kubelet[2906]: E0209 19:01:16.653670 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:01:16.751150 kubelet[2906]: E0209 19:01:16.751062 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:01:16.751150 kubelet[2906]: W0209 19:01:16.751147 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:01:16.751477 kubelet[2906]: E0209 19:01:16.751236 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:01:16.752581 kubelet[2906]: E0209 19:01:16.751795 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:01:16.752581 kubelet[2906]: W0209 19:01:16.751813 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:01:16.752581 kubelet[2906]: E0209 19:01:16.751839 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:01:16.752581 kubelet[2906]: E0209 19:01:16.752060 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:01:16.752581 kubelet[2906]: W0209 19:01:16.752071 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:01:16.752581 kubelet[2906]: E0209 19:01:16.752095 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:01:16.753121 kubelet[2906]: E0209 19:01:16.752623 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:01:16.753121 kubelet[2906]: W0209 19:01:16.752635 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:01:16.753121 kubelet[2906]: E0209 19:01:16.752657 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:01:16.753121 kubelet[2906]: E0209 19:01:16.753102 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:01:16.753806 kubelet[2906]: W0209 19:01:16.753141 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:01:16.753806 kubelet[2906]: E0209 19:01:16.753262 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:01:16.753906 kubelet[2906]: E0209 19:01:16.753829 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:01:16.753906 kubelet[2906]: W0209 19:01:16.753840 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:01:16.753906 kubelet[2906]: E0209 19:01:16.753861 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:01:16.754370 kubelet[2906]: E0209 19:01:16.754327 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:01:16.754370 kubelet[2906]: W0209 19:01:16.754347 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:01:16.754883 kubelet[2906]: E0209 19:01:16.754414 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:01:16.754883 kubelet[2906]: E0209 19:01:16.754893 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:01:16.755074 kubelet[2906]: W0209 19:01:16.754904 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:01:16.755074 kubelet[2906]: E0209 19:01:16.754927 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:01:16.755290 kubelet[2906]: E0209 19:01:16.755255 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:01:16.755290 kubelet[2906]: W0209 19:01:16.755265 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:01:16.755290 kubelet[2906]: E0209 19:01:16.755286 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:01:16.755922 kubelet[2906]: E0209 19:01:16.755830 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:01:16.755922 kubelet[2906]: W0209 19:01:16.755846 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:01:16.755922 kubelet[2906]: E0209 19:01:16.755876 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:01:16.756362 kubelet[2906]: E0209 19:01:16.756133 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:01:16.756362 kubelet[2906]: W0209 19:01:16.756144 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:01:16.756362 kubelet[2906]: E0209 19:01:16.756305 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:01:16.757217 kubelet[2906]: E0209 19:01:16.756871 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:01:16.757217 kubelet[2906]: W0209 19:01:16.756887 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:01:16.757217 kubelet[2906]: E0209 19:01:16.756908 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:01:16.757217 kubelet[2906]: E0209 19:01:16.757170 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:01:16.757217 kubelet[2906]: W0209 19:01:16.757180 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:01:16.757217 kubelet[2906]: E0209 19:01:16.757200 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:01:16.757217 kubelet[2906]: E0209 19:01:16.757678 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:01:16.757217 kubelet[2906]: W0209 19:01:16.757802 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:01:16.757217 kubelet[2906]: E0209 19:01:16.757905 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:01:16.757217 kubelet[2906]: E0209 19:01:16.758119 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:01:16.763471 kubelet[2906]: W0209 19:01:16.758129 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:01:16.763471 kubelet[2906]: E0209 19:01:16.758147 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:01:16.763471 kubelet[2906]: E0209 19:01:16.758599 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:01:16.763471 kubelet[2906]: W0209 19:01:16.758610 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:01:16.763471 kubelet[2906]: E0209 19:01:16.758630 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:01:16.763471 kubelet[2906]: E0209 19:01:16.758844 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:01:16.763471 kubelet[2906]: W0209 19:01:16.758854 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:01:16.763471 kubelet[2906]: E0209 19:01:16.758868 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:01:16.763471 kubelet[2906]: E0209 19:01:16.759463 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:01:16.763471 kubelet[2906]: W0209 19:01:16.759474 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:01:16.764320 kubelet[2906]: E0209 19:01:16.759489 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:01:16.794304 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount478985417.mount: Deactivated successfully. Feb 9 19:01:17.415944 kubelet[2906]: E0209 19:01:17.415528 2906 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sfvlp" podUID=196d1d2d-b701-4a86-946e-fecee4636cf4 Feb 9 19:01:17.562899 kubelet[2906]: I0209 19:01:17.562869 2906 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness" Feb 9 19:01:17.662092 kubelet[2906]: E0209 19:01:17.662064 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:01:17.663156 kubelet[2906]: W0209 19:01:17.663090 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:01:17.663718 kubelet[2906]: E0209 19:01:17.663650 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:01:17.664200 kubelet[2906]: E0209 19:01:17.664181 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:01:17.664200 kubelet[2906]: W0209 19:01:17.664197 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:01:17.664324 kubelet[2906]: E0209 19:01:17.664217 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:01:17.664629 kubelet[2906]: E0209 19:01:17.664608 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:01:17.664710 kubelet[2906]: W0209 19:01:17.664624 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:01:17.664710 kubelet[2906]: E0209 19:01:17.664661 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:01:17.664940 kubelet[2906]: E0209 19:01:17.664923 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:01:17.664940 kubelet[2906]: W0209 19:01:17.664936 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:01:17.665052 kubelet[2906]: E0209 19:01:17.664953 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:01:17.665292 kubelet[2906]: E0209 19:01:17.665274 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:01:17.665292 kubelet[2906]: W0209 19:01:17.665288 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:01:17.665416 kubelet[2906]: E0209 19:01:17.665305 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:01:17.665582 kubelet[2906]: E0209 19:01:17.665570 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:01:17.665657 kubelet[2906]: W0209 19:01:17.665637 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:01:17.665727 kubelet[2906]: E0209 19:01:17.665660 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:01:17.665941 kubelet[2906]: E0209 19:01:17.665926 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:01:17.665941 kubelet[2906]: W0209 19:01:17.665940 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:01:17.667433 kubelet[2906]: E0209 19:01:17.665956 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:01:17.667433 kubelet[2906]: E0209 19:01:17.666148 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:01:17.667433 kubelet[2906]: W0209 19:01:17.666158 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:01:17.667433 kubelet[2906]: E0209 19:01:17.666172 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:01:17.667433 kubelet[2906]: E0209 19:01:17.666384 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:01:17.667433 kubelet[2906]: W0209 19:01:17.666394 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:01:17.667433 kubelet[2906]: E0209 19:01:17.666409 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:01:17.667433 kubelet[2906]: E0209 19:01:17.666674 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:01:17.667433 kubelet[2906]: W0209 19:01:17.666683 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:01:17.667433 kubelet[2906]: E0209 19:01:17.666698 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:01:17.668156 kubelet[2906]: E0209 19:01:17.666884 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:01:17.668156 kubelet[2906]: W0209 19:01:17.666893 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:01:17.668156 kubelet[2906]: E0209 19:01:17.666907 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Feb 9 19:01:17.668156 kubelet[2906]: E0209 19:01:17.667078 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 19:01:17.668156 kubelet[2906]: W0209 19:01:17.667087 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 19:01:17.668156 kubelet[2906]: E0209 19:01:17.667108 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 19:01:17.668156 kubelet[2906]: E0209 19:01:17.667565 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 19:01:17.668156 kubelet[2906]: W0209 19:01:17.667577 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 19:01:17.668156 kubelet[2906]: E0209 19:01:17.667594 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 19:01:17.668156 kubelet[2906]: E0209 19:01:17.667894 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 19:01:17.668776 kubelet[2906]: W0209 19:01:17.667905 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 19:01:17.668776 kubelet[2906]: E0209 19:01:17.667961 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 19:01:17.668776 kubelet[2906]: E0209 19:01:17.668173 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 19:01:17.668776 kubelet[2906]: W0209 19:01:17.668182 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 19:01:17.668776 kubelet[2906]: E0209 19:01:17.668197 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 19:01:17.668776 kubelet[2906]: E0209 19:01:17.668481 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 19:01:17.668776 kubelet[2906]: W0209 19:01:17.668491 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 19:01:17.668776 kubelet[2906]: E0209 19:01:17.668506 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 19:01:17.668776 kubelet[2906]: E0209 19:01:17.668717 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 19:01:17.668776 kubelet[2906]: W0209 19:01:17.668726 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 19:01:17.669247 kubelet[2906]: E0209 19:01:17.668739 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 19:01:17.669247 kubelet[2906]: E0209 19:01:17.668897 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 19:01:17.669247 kubelet[2906]: W0209 19:01:17.668905 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 19:01:17.669247 kubelet[2906]: E0209 19:01:17.668918 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 19:01:17.669247 kubelet[2906]: E0209 19:01:17.669106 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 19:01:17.669247 kubelet[2906]: W0209 19:01:17.669114 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 19:01:17.669247 kubelet[2906]: E0209 19:01:17.669127 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 19:01:17.669854 kubelet[2906]: E0209 19:01:17.669836 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 19:01:17.669854 kubelet[2906]: W0209 19:01:17.669850 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 19:01:17.669966 kubelet[2906]: E0209 19:01:17.669872 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 19:01:17.670199 kubelet[2906]: E0209 19:01:17.670183 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 19:01:17.670199 kubelet[2906]: W0209 19:01:17.670198 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 19:01:17.670349 kubelet[2906]: E0209 19:01:17.670218 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 19:01:17.670445 kubelet[2906]: E0209 19:01:17.670426 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 19:01:17.670445 kubelet[2906]: W0209 19:01:17.670440 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 19:01:17.670649 kubelet[2906]: E0209 19:01:17.670466 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 19:01:17.670699 kubelet[2906]: E0209 19:01:17.670693 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 19:01:17.670758 kubelet[2906]: W0209 19:01:17.670702 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 19:01:17.670758 kubelet[2906]: E0209 19:01:17.670746 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 19:01:17.671046 kubelet[2906]: E0209 19:01:17.671029 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 19:01:17.671046 kubelet[2906]: W0209 19:01:17.671043 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 19:01:17.671157 kubelet[2906]: E0209 19:01:17.671137 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 19:01:17.671855 kubelet[2906]: E0209 19:01:17.671811 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 19:01:17.672173 kubelet[2906]: W0209 19:01:17.672150 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 19:01:17.672244 kubelet[2906]: E0209 19:01:17.672181 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 19:01:17.672484 kubelet[2906]: E0209 19:01:17.672468 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 19:01:17.672484 kubelet[2906]: W0209 19:01:17.672481 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 19:01:17.672661 kubelet[2906]: E0209 19:01:17.672502 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 19:01:17.672863 kubelet[2906]: E0209 19:01:17.672849 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 19:01:17.672958 kubelet[2906]: W0209 19:01:17.672906 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 19:01:17.673010 kubelet[2906]: E0209 19:01:17.672991 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 19:01:17.673202 kubelet[2906]: E0209 19:01:17.673189 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 19:01:17.673395 kubelet[2906]: W0209 19:01:17.673374 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 19:01:17.673459 kubelet[2906]: E0209 19:01:17.673408 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 19:01:17.673669 kubelet[2906]: E0209 19:01:17.673649 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 19:01:17.673669 kubelet[2906]: W0209 19:01:17.673667 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 19:01:17.673789 kubelet[2906]: E0209 19:01:17.673685 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 19:01:17.674761 kubelet[2906]: E0209 19:01:17.674749 2906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 19:01:17.674855 kubelet[2906]: W0209 19:01:17.674844 2906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 19:01:17.674939 kubelet[2906]: E0209 19:01:17.674930 2906 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 19:01:17.826139 env[1709]: time="2024-02-09T19:01:17.826091586Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:01:17.829482 env[1709]: time="2024-02-09T19:01:17.829437163Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6506d2e0be2d5ec9cb8dbe00c4b4f037c67b6ab4ec14a1f0c83333ac51f4da9a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:01:17.832467 env[1709]: time="2024-02-09T19:01:17.832275956Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:01:17.835590 env[1709]: time="2024-02-09T19:01:17.835554096Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b05edbd1f80db4ada229e6001a666a7dd36bb6ab617143684fb3d28abfc4b71e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:01:17.837407 env[1709]: time="2024-02-09T19:01:17.837367823Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\" returns image reference \"sha256:6506d2e0be2d5ec9cb8dbe00c4b4f037c67b6ab4ec14a1f0c83333ac51f4da9a\""
Feb 9 19:01:17.841751 env[1709]: time="2024-02-09T19:01:17.841707582Z" level=info msg="CreateContainer within sandbox \"e7b7644e5cfcf761970b4bdb22f8934e5d1e6d1839a06564b5ce6974fc00eecb\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Feb 9 19:01:17.862078 env[1709]: time="2024-02-09T19:01:17.862023684Z" level=info msg="CreateContainer within sandbox \"e7b7644e5cfcf761970b4bdb22f8934e5d1e6d1839a06564b5ce6974fc00eecb\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"814eae259b333dd3f2b3a251d2f78967771277dd650d27699414001af60bec1d\""
Feb 9 19:01:17.862738 env[1709]: time="2024-02-09T19:01:17.862700842Z" level=info msg="StartContainer for \"814eae259b333dd3f2b3a251d2f78967771277dd650d27699414001af60bec1d\""
Feb 9 19:01:17.915715 systemd[1]: run-containerd-runc-k8s.io-814eae259b333dd3f2b3a251d2f78967771277dd650d27699414001af60bec1d-runc.9axvcj.mount: Deactivated successfully.
Feb 9 19:01:17.998952 env[1709]: time="2024-02-09T19:01:17.991187309Z" level=info msg="StartContainer for \"814eae259b333dd3f2b3a251d2f78967771277dd650d27699414001af60bec1d\" returns successfully"
Feb 9 19:01:18.282450 env[1709]: time="2024-02-09T19:01:18.282387215Z" level=info msg="shim disconnected" id=814eae259b333dd3f2b3a251d2f78967771277dd650d27699414001af60bec1d
Feb 9 19:01:18.282450 env[1709]: time="2024-02-09T19:01:18.282459332Z" level=warning msg="cleaning up after shim disconnected" id=814eae259b333dd3f2b3a251d2f78967771277dd650d27699414001af60bec1d namespace=k8s.io
Feb 9 19:01:18.282450 env[1709]: time="2024-02-09T19:01:18.282475141Z" level=info msg="cleaning up dead shim"
Feb 9 19:01:18.310023 env[1709]: time="2024-02-09T19:01:18.309975259Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:01:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3662 runtime=io.containerd.runc.v2\n"
Feb 9 19:01:18.573815 env[1709]: time="2024-02-09T19:01:18.572850360Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\""
Feb 9 19:01:18.857484 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-814eae259b333dd3f2b3a251d2f78967771277dd650d27699414001af60bec1d-rootfs.mount: Deactivated successfully.
Feb 9 19:01:19.415111 kubelet[2906]: E0209 19:01:19.415073 2906 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sfvlp" podUID=196d1d2d-b701-4a86-946e-fecee4636cf4
Feb 9 19:01:19.913059 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3676539545.mount: Deactivated successfully.
Feb 9 19:01:21.415771 kubelet[2906]: E0209 19:01:21.415710 2906 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sfvlp" podUID=196d1d2d-b701-4a86-946e-fecee4636cf4
Feb 9 19:01:23.416160 kubelet[2906]: E0209 19:01:23.416120 2906 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sfvlp" podUID=196d1d2d-b701-4a86-946e-fecee4636cf4
Feb 9 19:01:25.403249 env[1709]: time="2024-02-09T19:01:25.403195702Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:01:25.406049 env[1709]: time="2024-02-09T19:01:25.406004288Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8e8d96a874c0e2f137bc6e0ff4b9da4ac2341852e41d99ab81983d329bb87d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:01:25.408808 env[1709]: time="2024-02-09T19:01:25.408762119Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:01:25.411291 env[1709]: time="2024-02-09T19:01:25.411251711Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:d943b4c23e82a39b0186a1a3b2fe8f728e543d503df72d7be521501a82b7e7b4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:01:25.412338 env[1709]: time="2024-02-09T19:01:25.412217089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\" returns image reference \"sha256:8e8d96a874c0e2f137bc6e0ff4b9da4ac2341852e41d99ab81983d329bb87d93\""
Feb 9 19:01:25.415115 kubelet[2906]: E0209 19:01:25.415075 2906 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sfvlp" podUID=196d1d2d-b701-4a86-946e-fecee4636cf4
Feb 9 19:01:25.418336 env[1709]: time="2024-02-09T19:01:25.417333408Z" level=info msg="CreateContainer within sandbox \"e7b7644e5cfcf761970b4bdb22f8934e5d1e6d1839a06564b5ce6974fc00eecb\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Feb 9 19:01:25.491330 env[1709]: time="2024-02-09T19:01:25.491268725Z" level=info msg="CreateContainer within sandbox \"e7b7644e5cfcf761970b4bdb22f8934e5d1e6d1839a06564b5ce6974fc00eecb\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"786254b78ec457d2fbecea285e9e79dd2140696dc43620a72517ad3e11c433bd\""
Feb 9 19:01:25.492076 env[1709]: time="2024-02-09T19:01:25.492029997Z" level=info msg="StartContainer for \"786254b78ec457d2fbecea285e9e79dd2140696dc43620a72517ad3e11c433bd\""
Feb 9 19:01:25.589115 env[1709]: time="2024-02-09T19:01:25.589032742Z" level=info msg="StartContainer for \"786254b78ec457d2fbecea285e9e79dd2140696dc43620a72517ad3e11c433bd\" returns successfully"
Feb 9 19:01:26.933136 env[1709]: time="2024-02-09T19:01:26.933062449Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 9 19:01:26.986945 kubelet[2906]: I0209 19:01:26.986915 2906 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Feb 9 19:01:27.000089 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-786254b78ec457d2fbecea285e9e79dd2140696dc43620a72517ad3e11c433bd-rootfs.mount: Deactivated successfully.
Feb 9 19:01:27.019540 env[1709]: time="2024-02-09T19:01:27.019235997Z" level=info msg="shim disconnected" id=786254b78ec457d2fbecea285e9e79dd2140696dc43620a72517ad3e11c433bd
Feb 9 19:01:27.019540 env[1709]: time="2024-02-09T19:01:27.019306503Z" level=warning msg="cleaning up after shim disconnected" id=786254b78ec457d2fbecea285e9e79dd2140696dc43620a72517ad3e11c433bd namespace=k8s.io
Feb 9 19:01:27.019540 env[1709]: time="2024-02-09T19:01:27.019320359Z" level=info msg="cleaning up dead shim"
Feb 9 19:01:27.027259 kubelet[2906]: I0209 19:01:27.026272 2906 topology_manager.go:210] "Topology Admit Handler"
Feb 9 19:01:27.029701 kubelet[2906]: I0209 19:01:27.029369 2906 topology_manager.go:210] "Topology Admit Handler"
Feb 9 19:01:27.044881 kubelet[2906]: I0209 19:01:27.044845 2906 topology_manager.go:210] "Topology Admit Handler"
Feb 9 19:01:27.068505 env[1709]: time="2024-02-09T19:01:27.068443400Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:01:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3737 runtime=io.containerd.runc.v2\n"
Feb 9 19:01:27.174410 kubelet[2906]: I0209 19:01:27.174351 2906 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/607248e1-9d7a-4a9e-9970-28f4dcfc35fc-tigera-ca-bundle\") pod \"calico-kube-controllers-78449db666-g6hg6\" (UID: \"607248e1-9d7a-4a9e-9970-28f4dcfc35fc\") " pod="calico-system/calico-kube-controllers-78449db666-g6hg6"
Feb 9 19:01:27.174677 kubelet[2906]: I0209 19:01:27.174523 2906 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91c852c3-24a9-4f7a-b9bf-6a6e2d9f5e9a-config-volume\") pod \"coredns-787d4945fb-ttftx\" (UID: \"91c852c3-24a9-4f7a-b9bf-6a6e2d9f5e9a\") " pod="kube-system/coredns-787d4945fb-ttftx"
Feb 9 19:01:27.174677 kubelet[2906]: I0209 19:01:27.174610 2906 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2fa17834-5f19-4801-a2cd-cf352498e924-config-volume\") pod \"coredns-787d4945fb-ztsmf\" (UID: \"2fa17834-5f19-4801-a2cd-cf352498e924\") " pod="kube-system/coredns-787d4945fb-ztsmf"
Feb 9 19:01:27.174677 kubelet[2906]: I0209 19:01:27.174647 2906 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdmfz\" (UniqueName: \"kubernetes.io/projected/91c852c3-24a9-4f7a-b9bf-6a6e2d9f5e9a-kube-api-access-wdmfz\") pod \"coredns-787d4945fb-ttftx\" (UID: \"91c852c3-24a9-4f7a-b9bf-6a6e2d9f5e9a\") " pod="kube-system/coredns-787d4945fb-ttftx"
Feb 9 19:01:27.174827 kubelet[2906]: I0209 19:01:27.174684 2906 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d972b\" (UniqueName: \"kubernetes.io/projected/607248e1-9d7a-4a9e-9970-28f4dcfc35fc-kube-api-access-d972b\") pod \"calico-kube-controllers-78449db666-g6hg6\" (UID: \"607248e1-9d7a-4a9e-9970-28f4dcfc35fc\") " pod="calico-system/calico-kube-controllers-78449db666-g6hg6"
Feb 9 19:01:27.174827 kubelet[2906]: I0209 19:01:27.174717 2906 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5tp9\" (UniqueName: \"kubernetes.io/projected/2fa17834-5f19-4801-a2cd-cf352498e924-kube-api-access-w5tp9\") pod \"coredns-787d4945fb-ztsmf\" (UID: \"2fa17834-5f19-4801-a2cd-cf352498e924\") " pod="kube-system/coredns-787d4945fb-ztsmf"
Feb 9 19:01:27.349798 env[1709]: time="2024-02-09T19:01:27.349755688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-ztsmf,Uid:2fa17834-5f19-4801-a2cd-cf352498e924,Namespace:kube-system,Attempt:0,}"
Feb 9 19:01:27.364050 env[1709]: time="2024-02-09T19:01:27.364006410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-ttftx,Uid:91c852c3-24a9-4f7a-b9bf-6a6e2d9f5e9a,Namespace:kube-system,Attempt:0,}"
Feb 9 19:01:27.364429 env[1709]: time="2024-02-09T19:01:27.364387668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78449db666-g6hg6,Uid:607248e1-9d7a-4a9e-9970-28f4dcfc35fc,Namespace:calico-system,Attempt:0,}"
Feb 9 19:01:27.423121 env[1709]: time="2024-02-09T19:01:27.423062357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sfvlp,Uid:196d1d2d-b701-4a86-946e-fecee4636cf4,Namespace:calico-system,Attempt:0,}"
Feb 9 19:01:27.613245 env[1709]: time="2024-02-09T19:01:27.611479135Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\""
Feb 9 19:01:27.656212 env[1709]: time="2024-02-09T19:01:27.655970425Z" level=error msg="Failed to destroy network for sandbox \"06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 9 19:01:27.656692 env[1709]: time="2024-02-09T19:01:27.656549544Z" level=error msg="encountered an error cleaning up failed sandbox \"06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 9 19:01:27.656938 env[1709]: time="2024-02-09T19:01:27.656712155Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-ztsmf,Uid:2fa17834-5f19-4801-a2cd-cf352498e924,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 9 19:01:27.658266 kubelet[2906]: E0209 19:01:27.657958 2906 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 9 19:01:27.659725 kubelet[2906]: E0209 19:01:27.658651 2906 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-ztsmf"
Feb 9 19:01:27.659725 kubelet[2906]: E0209 19:01:27.658850 2906 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-ztsmf"
Feb 9 19:01:27.659725 kubelet[2906]: E0209 19:01:27.659691 2906 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-ztsmf_kube-system(2fa17834-5f19-4801-a2cd-cf352498e924)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-ztsmf_kube-system(2fa17834-5f19-4801-a2cd-cf352498e924)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-ztsmf" podUID=2fa17834-5f19-4801-a2cd-cf352498e924
Feb 9 19:01:27.716538 env[1709]: time="2024-02-09T19:01:27.716456284Z" level=error msg="Failed to destroy network for sandbox \"a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 9 19:01:27.716917 env[1709]: time="2024-02-09T19:01:27.716872889Z" level=error msg="encountered an error cleaning up failed sandbox \"a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 9 19:01:27.717007 env[1709]: time="2024-02-09T19:01:27.716946416Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sfvlp,Uid:196d1d2d-b701-4a86-946e-fecee4636cf4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 9 19:01:27.717549 kubelet[2906]: E0209 19:01:27.717227 2906 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 9 19:01:27.717549 kubelet[2906]: E0209 19:01:27.717291 2906 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sfvlp"
Feb 9 19:01:27.717549 kubelet[2906]: E0209 19:01:27.717324 2906 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sfvlp"
Feb 9 19:01:27.717764 kubelet[2906]: E0209 19:01:27.717488 2906 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-sfvlp_calico-system(196d1d2d-b701-4a86-946e-fecee4636cf4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-sfvlp_calico-system(196d1d2d-b701-4a86-946e-fecee4636cf4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sfvlp" podUID=196d1d2d-b701-4a86-946e-fecee4636cf4
Feb 9 19:01:27.728149 env[1709]: time="2024-02-09T19:01:27.727965955Z" level=error msg="Failed to destroy network for sandbox \"0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 9 19:01:27.728533 env[1709]: time="2024-02-09T19:01:27.728464153Z" level=error msg="encountered an error cleaning up failed sandbox \"0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 9 19:01:27.728639 env[1709]: time="2024-02-09T19:01:27.728544290Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78449db666-g6hg6,Uid:607248e1-9d7a-4a9e-9970-28f4dcfc35fc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 9 19:01:27.728804 kubelet[2906]: E0209 19:01:27.728777 2906 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 9 19:01:27.728896 kubelet[2906]: E0209 19:01:27.728842 2906 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78449db666-g6hg6"
Feb 9 19:01:27.728896 kubelet[2906]: E0209 19:01:27.728874 2906 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78449db666-g6hg6"
Feb 9 19:01:27.729004 kubelet[2906]: E0209 19:01:27.728945 2906 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-78449db666-g6hg6_calico-system(607248e1-9d7a-4a9e-9970-28f4dcfc35fc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-78449db666-g6hg6_calico-system(607248e1-9d7a-4a9e-9970-28f4dcfc35fc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78449db666-g6hg6" podUID=607248e1-9d7a-4a9e-9970-28f4dcfc35fc
Feb 9 19:01:27.767006 env[1709]: time="2024-02-09T19:01:27.765409302Z" level=error msg="Failed to destroy network for sandbox \"642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 9 19:01:27.768043 env[1709]: time="2024-02-09T19:01:27.767985929Z" level=error msg="encountered an error cleaning up failed sandbox \"642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 9 19:01:27.768164 env[1709]: time="2024-02-09T19:01:27.768067345Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-ttftx,Uid:91c852c3-24a9-4f7a-b9bf-6a6e2d9f5e9a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 9 19:01:27.769757 kubelet[2906]: E0209 19:01:27.769720 2906 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 9 19:01:27.769880 kubelet[2906]: E0209 19:01:27.769805 2906 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-ttftx"
Feb 9 19:01:27.769880 kubelet[2906]: E0209 19:01:27.769852 2906 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-ttftx"
Feb 9 19:01:27.769983 kubelet[2906]: E0209 19:01:27.769926 2906 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-ttftx_kube-system(91c852c3-24a9-4f7a-b9bf-6a6e2d9f5e9a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-ttftx_kube-system(91c852c3-24a9-4f7a-b9bf-6a6e2d9f5e9a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-ttftx" podUID=91c852c3-24a9-4f7a-b9bf-6a6e2d9f5e9a
Feb 9 19:01:28.003786 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5-shm.mount: Deactivated successfully.
Feb 9 19:01:28.610255 kubelet[2906]: I0209 19:01:28.610223 2906 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a" Feb 9 19:01:28.612809 env[1709]: time="2024-02-09T19:01:28.612767879Z" level=info msg="StopPodSandbox for \"a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a\"" Feb 9 19:01:28.615950 kubelet[2906]: I0209 19:01:28.615918 2906 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608" Feb 9 19:01:28.619356 env[1709]: time="2024-02-09T19:01:28.619312380Z" level=info msg="StopPodSandbox for \"642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608\"" Feb 9 19:01:28.621073 kubelet[2906]: I0209 19:01:28.621042 2906 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649" Feb 9 19:01:28.633435 env[1709]: time="2024-02-09T19:01:28.633322112Z" level=info msg="StopPodSandbox for \"0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649\"" Feb 9 19:01:28.636547 kubelet[2906]: I0209 19:01:28.636519 2906 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5" Feb 9 19:01:28.645606 env[1709]: time="2024-02-09T19:01:28.645507408Z" level=info msg="StopPodSandbox for \"06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5\"" Feb 9 19:01:28.723344 env[1709]: time="2024-02-09T19:01:28.723277164Z" level=error msg="StopPodSandbox for \"a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a\" failed" error="failed to destroy network for sandbox \"a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Feb 9 19:01:28.724055 kubelet[2906]: E0209 19:01:28.723755 2906 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a" Feb 9 19:01:28.724055 kubelet[2906]: E0209 19:01:28.723837 2906 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a} Feb 9 19:01:28.724055 kubelet[2906]: E0209 19:01:28.723966 2906 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"196d1d2d-b701-4a86-946e-fecee4636cf4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 19:01:28.724055 kubelet[2906]: E0209 19:01:28.724013 2906 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"196d1d2d-b701-4a86-946e-fecee4636cf4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sfvlp" podUID=196d1d2d-b701-4a86-946e-fecee4636cf4 Feb 
9 19:01:28.759969 env[1709]: time="2024-02-09T19:01:28.759899036Z" level=error msg="StopPodSandbox for \"0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649\" failed" error="failed to destroy network for sandbox \"0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:01:28.760205 kubelet[2906]: E0209 19:01:28.760180 2906 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649" Feb 9 19:01:28.760323 kubelet[2906]: E0209 19:01:28.760228 2906 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649} Feb 9 19:01:28.760323 kubelet[2906]: E0209 19:01:28.760281 2906 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"607248e1-9d7a-4a9e-9970-28f4dcfc35fc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 19:01:28.760323 kubelet[2906]: E0209 19:01:28.760319 2906 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"607248e1-9d7a-4a9e-9970-28f4dcfc35fc\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78449db666-g6hg6" podUID=607248e1-9d7a-4a9e-9970-28f4dcfc35fc Feb 9 19:01:28.763068 env[1709]: time="2024-02-09T19:01:28.763001220Z" level=error msg="StopPodSandbox for \"642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608\" failed" error="failed to destroy network for sandbox \"642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:01:28.763277 kubelet[2906]: E0209 19:01:28.763253 2906 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608" Feb 9 19:01:28.763387 kubelet[2906]: E0209 19:01:28.763299 2906 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608} Feb 9 19:01:28.763387 kubelet[2906]: E0209 19:01:28.763352 2906 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"91c852c3-24a9-4f7a-b9bf-6a6e2d9f5e9a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 19:01:28.763581 kubelet[2906]: E0209 19:01:28.763394 2906 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"91c852c3-24a9-4f7a-b9bf-6a6e2d9f5e9a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-ttftx" podUID=91c852c3-24a9-4f7a-b9bf-6a6e2d9f5e9a Feb 9 19:01:28.779185 env[1709]: time="2024-02-09T19:01:28.779046199Z" level=error msg="StopPodSandbox for \"06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5\" failed" error="failed to destroy network for sandbox \"06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:01:28.780305 kubelet[2906]: E0209 19:01:28.780254 2906 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5" Feb 9 19:01:28.780495 kubelet[2906]: E0209 19:01:28.780304 2906 
kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5} Feb 9 19:01:28.780495 kubelet[2906]: E0209 19:01:28.780445 2906 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2fa17834-5f19-4801-a2cd-cf352498e924\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 19:01:28.780495 kubelet[2906]: E0209 19:01:28.780493 2906 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2fa17834-5f19-4801-a2cd-cf352498e924\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-ztsmf" podUID=2fa17834-5f19-4801-a2cd-cf352498e924 Feb 9 19:01:35.624376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3474138881.mount: Deactivated successfully. 
Feb 9 19:01:35.697432 env[1709]: time="2024-02-09T19:01:35.697321629Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:01:35.700461 env[1709]: time="2024-02-09T19:01:35.700418011Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1843802b91be8ff1c1d35ee08461ebe909e7a2199e59396f69886439a372312c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:01:35.702620 env[1709]: time="2024-02-09T19:01:35.702582713Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:01:35.705265 env[1709]: time="2024-02-09T19:01:35.705225376Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:a45dffb21a0e9ca8962f36359a2ab776beeecd93843543c2fa1745d7bbb0f754,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:01:35.705676 env[1709]: time="2024-02-09T19:01:35.705643167Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\" returns image reference \"sha256:1843802b91be8ff1c1d35ee08461ebe909e7a2199e59396f69886439a372312c\"" Feb 9 19:01:35.728064 env[1709]: time="2024-02-09T19:01:35.728021350Z" level=info msg="CreateContainer within sandbox \"e7b7644e5cfcf761970b4bdb22f8934e5d1e6d1839a06564b5ce6974fc00eecb\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 9 19:01:35.749690 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount779117802.mount: Deactivated successfully. 
Feb 9 19:01:35.759532 env[1709]: time="2024-02-09T19:01:35.759475354Z" level=info msg="CreateContainer within sandbox \"e7b7644e5cfcf761970b4bdb22f8934e5d1e6d1839a06564b5ce6974fc00eecb\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7b68f7f0947b9dc802baf5f137268570e843849a31d7324725cf8b39423685e9\"" Feb 9 19:01:35.760801 env[1709]: time="2024-02-09T19:01:35.760707833Z" level=info msg="StartContainer for \"7b68f7f0947b9dc802baf5f137268570e843849a31d7324725cf8b39423685e9\"" Feb 9 19:01:35.831658 env[1709]: time="2024-02-09T19:01:35.831605201Z" level=info msg="StartContainer for \"7b68f7f0947b9dc802baf5f137268570e843849a31d7324725cf8b39423685e9\" returns successfully" Feb 9 19:01:36.031628 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 9 19:01:36.032357 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Feb 9 19:01:37.612000 audit[4092]: AVC avc: denied { write } for pid=4092 comm="tee" name="fd" dev="proc" ino=25361 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:01:37.626867 kernel: audit: type=1400 audit(1707505297.612:285): avc: denied { write } for pid=4092 comm="tee" name="fd" dev="proc" ino=25361 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:01:37.627890 kernel: audit: type=1300 audit(1707505297.612:285): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcafd26987 a2=241 a3=1b6 items=1 ppid=4062 pid=4092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:37.627956 kernel: audit: type=1307 audit(1707505297.612:285): cwd="/etc/service/enabled/cni/log" Feb 9 19:01:37.612000 audit[4092]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcafd26987 a2=241 a3=1b6 items=1 
ppid=4062 pid=4092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:37.612000 audit: CWD cwd="/etc/service/enabled/cni/log" Feb 9 19:01:37.634618 kernel: audit: type=1302 audit(1707505297.612:285): item=0 name="/dev/fd/63" inode=25338 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:37.612000 audit: PATH item=0 name="/dev/fd/63" inode=25338 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:37.639743 kernel: audit: type=1327 audit(1707505297.612:285): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:01:37.612000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:01:37.645654 kernel: audit: type=1400 audit(1707505297.619:286): avc: denied { write } for pid=4111 comm="tee" name="fd" dev="proc" ino=25370 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:01:37.619000 audit[4111]: AVC avc: denied { write } for pid=4111 comm="tee" name="fd" dev="proc" ino=25370 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:01:37.619000 audit[4111]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffec756d986 a2=241 a3=1b6 items=1 ppid=4071 pid=4111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:37.658543 
kernel: audit: type=1300 audit(1707505297.619:286): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffec756d986 a2=241 a3=1b6 items=1 ppid=4071 pid=4111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:37.619000 audit: CWD cwd="/etc/service/enabled/bird/log" Feb 9 19:01:37.663529 kernel: audit: type=1307 audit(1707505297.619:286): cwd="/etc/service/enabled/bird/log" Feb 9 19:01:37.619000 audit: PATH item=0 name="/dev/fd/63" inode=25347 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:37.672610 kernel: audit: type=1302 audit(1707505297.619:286): item=0 name="/dev/fd/63" inode=25347 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:37.619000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:01:37.677531 kernel: audit: type=1327 audit(1707505297.619:286): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:01:37.658000 audit[4123]: AVC avc: denied { write } for pid=4123 comm="tee" name="fd" dev="proc" ino=25881 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:01:37.658000 audit[4123]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc84b88976 a2=241 a3=1b6 items=1 ppid=4060 pid=4123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 
19:01:37.658000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Feb 9 19:01:37.658000 audit: PATH item=0 name="/dev/fd/63" inode=25367 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:37.658000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:01:37.692000 audit[4116]: AVC avc: denied { write } for pid=4116 comm="tee" name="fd" dev="proc" ino=25388 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:01:37.692000 audit[4116]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffed7756975 a2=241 a3=1b6 items=1 ppid=4064 pid=4116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:37.692000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Feb 9 19:01:37.692000 audit: PATH item=0 name="/dev/fd/63" inode=25358 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:37.692000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:01:37.698000 audit[4119]: AVC avc: denied { write } for pid=4119 comm="tee" name="fd" dev="proc" ino=25392 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:01:37.698000 audit[4119]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffeabec2985 a2=241 a3=1b6 items=1 ppid=4079 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:37.698000 audit: CWD cwd="/etc/service/enabled/felix/log" Feb 9 19:01:37.698000 audit: PATH item=0 name="/dev/fd/63" inode=25366 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:37.698000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:01:37.722000 audit[4128]: AVC avc: denied { write } for pid=4128 comm="tee" name="fd" dev="proc" ino=25909 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:01:37.729000 audit[4125]: AVC avc: denied { write } for pid=4125 comm="tee" name="fd" dev="proc" ino=25912 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:01:37.729000 audit[4125]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff6f426985 a2=241 a3=1b6 items=1 ppid=4068 pid=4125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:37.729000 audit: CWD cwd="/etc/service/enabled/bird6/log" Feb 9 19:01:37.729000 audit: PATH item=0 name="/dev/fd/63" inode=25374 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:37.729000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:01:37.722000 audit[4128]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff66546985 a2=241 a3=1b6 items=1 ppid=4069 
pid=4128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:37.722000 audit: CWD cwd="/etc/service/enabled/confd/log" Feb 9 19:01:37.722000 audit: PATH item=0 name="/dev/fd/63" inode=25378 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:37.722000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:01:40.195943 kubelet[2906]: I0209 19:01:40.195906 2906 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness" Feb 9 19:01:40.278568 kubelet[2906]: I0209 19:01:40.276845 2906 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-dcqp7" podStartSLOduration=-9.223372007585714e+09 pod.CreationTimestamp="2024-02-09 19:01:11 +0000 UTC" firstStartedPulling="2024-02-09 19:01:12.033475909 +0000 UTC m=+20.172423762" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:01:36.707918088 +0000 UTC m=+44.846865947" watchObservedRunningTime="2024-02-09 19:01:40.269060961 +0000 UTC m=+48.408008892" Feb 9 19:01:40.417376 env[1709]: time="2024-02-09T19:01:40.417241669Z" level=info msg="StopPodSandbox for \"06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5\"" Feb 9 19:01:40.420691 env[1709]: time="2024-02-09T19:01:40.420651335Z" level=info msg="StopPodSandbox for \"0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649\"" Feb 9 19:01:40.553000 audit[4289]: NETFILTER_CFG table=filter:111 family=2 entries=13 op=nft_register_rule pid=4289 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:01:40.553000 audit[4289]: SYSCALL arch=c000003e syscall=46 success=yes exit=4028 a0=3 
a1=7ffc6d9943e0 a2=0 a3=7ffc6d9943cc items=0 ppid=3099 pid=4289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:40.553000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:01:40.555000 audit[4289]: NETFILTER_CFG table=nat:112 family=2 entries=27 op=nft_register_chain pid=4289 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:01:40.555000 audit[4289]: SYSCALL arch=c000003e syscall=46 success=yes exit=8836 a0=3 a1=7ffc6d9943e0 a2=0 a3=7ffc6d9943cc items=0 ppid=3099 pid=4289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:40.555000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:01:40.897784 env[1709]: 2024-02-09 19:01:40.605 [INFO][4255] k8s.go 578: Cleaning up netns ContainerID="06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5" Feb 9 19:01:40.897784 env[1709]: 2024-02-09 19:01:40.606 [INFO][4255] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5" iface="eth0" netns="/var/run/netns/cni-b18b1b25-a62d-7acc-7adb-0a3fb70548d3" Feb 9 19:01:40.897784 env[1709]: 2024-02-09 19:01:40.607 [INFO][4255] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5" iface="eth0" netns="/var/run/netns/cni-b18b1b25-a62d-7acc-7adb-0a3fb70548d3" Feb 9 19:01:40.897784 env[1709]: 2024-02-09 19:01:40.607 [INFO][4255] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5" iface="eth0" netns="/var/run/netns/cni-b18b1b25-a62d-7acc-7adb-0a3fb70548d3"
Feb 9 19:01:40.897784 env[1709]: 2024-02-09 19:01:40.607 [INFO][4255] k8s.go 585: Releasing IP address(es) ContainerID="06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5"
Feb 9 19:01:40.897784 env[1709]: 2024-02-09 19:01:40.607 [INFO][4255] utils.go 188: Calico CNI releasing IP address ContainerID="06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5"
Feb 9 19:01:40.897784 env[1709]: 2024-02-09 19:01:40.868 [INFO][4295] ipam_plugin.go 415: Releasing address using handleID ContainerID="06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5" HandleID="k8s-pod-network.06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5" Workload="ip--172--31--19--7-k8s-coredns--787d4945fb--ztsmf-eth0"
Feb 9 19:01:40.897784 env[1709]: 2024-02-09 19:01:40.870 [INFO][4295] ipam_plugin.go 356: About to acquire host-wide IPAM lock.
Feb 9 19:01:40.897784 env[1709]: 2024-02-09 19:01:40.870 [INFO][4295] ipam_plugin.go 371: Acquired host-wide IPAM lock.
Feb 9 19:01:40.897784 env[1709]: 2024-02-09 19:01:40.885 [WARNING][4295] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5" HandleID="k8s-pod-network.06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5" Workload="ip--172--31--19--7-k8s-coredns--787d4945fb--ztsmf-eth0"
Feb 9 19:01:40.897784 env[1709]: 2024-02-09 19:01:40.885 [INFO][4295] ipam_plugin.go 443: Releasing address using workloadID ContainerID="06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5" HandleID="k8s-pod-network.06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5" Workload="ip--172--31--19--7-k8s-coredns--787d4945fb--ztsmf-eth0"
Feb 9 19:01:40.897784 env[1709]: 2024-02-09 19:01:40.888 [INFO][4295] ipam_plugin.go 377: Released host-wide IPAM lock.
Feb 9 19:01:40.897784 env[1709]: 2024-02-09 19:01:40.892 [INFO][4255] k8s.go 591: Teardown processing complete. ContainerID="06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5"
Feb 9 19:01:40.907862 systemd[1]: run-netns-cni\x2db18b1b25\x2da62d\x2d7acc\x2d7adb\x2d0a3fb70548d3.mount: Deactivated successfully.
Feb 9 19:01:40.908718 env[1709]: time="2024-02-09T19:01:40.908670074Z" level=info msg="TearDown network for sandbox \"06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5\" successfully"
Feb 9 19:01:40.908854 env[1709]: time="2024-02-09T19:01:40.908834258Z" level=info msg="StopPodSandbox for \"06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5\" returns successfully"
Feb 9 19:01:40.910699 env[1709]: time="2024-02-09T19:01:40.910662758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-ztsmf,Uid:2fa17834-5f19-4801-a2cd-cf352498e924,Namespace:kube-system,Attempt:1,}"
Feb 9 19:01:40.917821 env[1709]: 2024-02-09 19:01:40.593 [INFO][4267] k8s.go 578: Cleaning up netns ContainerID="0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649"
Feb 9 19:01:40.917821 env[1709]: 2024-02-09 19:01:40.602 [INFO][4267] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649" iface="eth0" netns="/var/run/netns/cni-fffc3ee5-37c7-bbb1-4ef9-167328774132"
Feb 9 19:01:40.917821 env[1709]: 2024-02-09 19:01:40.602 [INFO][4267] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649" iface="eth0" netns="/var/run/netns/cni-fffc3ee5-37c7-bbb1-4ef9-167328774132"
Feb 9 19:01:40.917821 env[1709]: 2024-02-09 19:01:40.604 [INFO][4267] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649" iface="eth0" netns="/var/run/netns/cni-fffc3ee5-37c7-bbb1-4ef9-167328774132"
Feb 9 19:01:40.917821 env[1709]: 2024-02-09 19:01:40.606 [INFO][4267] k8s.go 585: Releasing IP address(es) ContainerID="0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649"
Feb 9 19:01:40.917821 env[1709]: 2024-02-09 19:01:40.606 [INFO][4267] utils.go 188: Calico CNI releasing IP address ContainerID="0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649"
Feb 9 19:01:40.917821 env[1709]: 2024-02-09 19:01:40.868 [INFO][4294] ipam_plugin.go 415: Releasing address using handleID ContainerID="0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649" HandleID="k8s-pod-network.0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649" Workload="ip--172--31--19--7-k8s-calico--kube--controllers--78449db666--g6hg6-eth0"
Feb 9 19:01:40.917821 env[1709]: 2024-02-09 19:01:40.871 [INFO][4294] ipam_plugin.go 356: About to acquire host-wide IPAM lock.
Feb 9 19:01:40.917821 env[1709]: 2024-02-09 19:01:40.888 [INFO][4294] ipam_plugin.go 371: Acquired host-wide IPAM lock.
Feb 9 19:01:40.917821 env[1709]: 2024-02-09 19:01:40.902 [WARNING][4294] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649" HandleID="k8s-pod-network.0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649" Workload="ip--172--31--19--7-k8s-calico--kube--controllers--78449db666--g6hg6-eth0"
Feb 9 19:01:40.917821 env[1709]: 2024-02-09 19:01:40.902 [INFO][4294] ipam_plugin.go 443: Releasing address using workloadID ContainerID="0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649" HandleID="k8s-pod-network.0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649" Workload="ip--172--31--19--7-k8s-calico--kube--controllers--78449db666--g6hg6-eth0"
Feb 9 19:01:40.917821 env[1709]: 2024-02-09 19:01:40.907 [INFO][4294] ipam_plugin.go 377: Released host-wide IPAM lock.
Feb 9 19:01:40.917821 env[1709]: 2024-02-09 19:01:40.914 [INFO][4267] k8s.go 591: Teardown processing complete. ContainerID="0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649"
Feb 9 19:01:40.919293 env[1709]: time="2024-02-09T19:01:40.919241573Z" level=info msg="TearDown network for sandbox \"0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649\" successfully"
Feb 9 19:01:40.919431 env[1709]: time="2024-02-09T19:01:40.919397217Z" level=info msg="StopPodSandbox for \"0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649\" returns successfully"
Feb 9 19:01:40.924457 systemd[1]: run-netns-cni\x2dfffc3ee5\x2d37c7\x2dbbb1\x2d4ef9\x2d167328774132.mount: Deactivated successfully.
Feb 9 19:01:40.925417 env[1709]: time="2024-02-09T19:01:40.925334752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78449db666-g6hg6,Uid:607248e1-9d7a-4a9e-9970-28f4dcfc35fc,Namespace:calico-system,Attempt:1,}"
Feb 9 19:01:41.325814 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 9 19:01:41.326067 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali709f8c0876a: link becomes ready
Feb 9 19:01:41.326677 (udev-worker)[4353]: Network interface NamePolicy= disabled on kernel command line.
Feb 9 19:01:41.329042 systemd-networkd[1512]: cali709f8c0876a: Link UP
Feb 9 19:01:41.329320 systemd-networkd[1512]: cali709f8c0876a: Gained carrier
Feb 9 19:01:41.367627 env[1709]: 2024-02-09 19:01:41.051 [INFO][4317] utils.go 100: File /var/lib/calico/mtu does not exist
Feb 9 19:01:41.367627 env[1709]: 2024-02-09 19:01:41.068 [INFO][4317] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--7-k8s-calico--kube--controllers--78449db666--g6hg6-eth0 calico-kube-controllers-78449db666- calico-system 607248e1-9d7a-4a9e-9970-28f4dcfc35fc 695 0 2024-02-09 19:01:11 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:78449db666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-19-7 calico-kube-controllers-78449db666-g6hg6 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali709f8c0876a [] []}} ContainerID="add3667226ff6d52a34c0fa1f0ce7852032b86882dbaec0a3f7551aef714dcf7" Namespace="calico-system" Pod="calico-kube-controllers-78449db666-g6hg6" WorkloadEndpoint="ip--172--31--19--7-k8s-calico--kube--controllers--78449db666--g6hg6-"
Feb 9 19:01:41.367627 env[1709]: 2024-02-09 19:01:41.068 [INFO][4317] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="add3667226ff6d52a34c0fa1f0ce7852032b86882dbaec0a3f7551aef714dcf7" Namespace="calico-system" Pod="calico-kube-controllers-78449db666-g6hg6" WorkloadEndpoint="ip--172--31--19--7-k8s-calico--kube--controllers--78449db666--g6hg6-eth0"
Feb 9 19:01:41.367627 env[1709]: 2024-02-09 19:01:41.168 [INFO][4329] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="add3667226ff6d52a34c0fa1f0ce7852032b86882dbaec0a3f7551aef714dcf7" HandleID="k8s-pod-network.add3667226ff6d52a34c0fa1f0ce7852032b86882dbaec0a3f7551aef714dcf7" Workload="ip--172--31--19--7-k8s-calico--kube--controllers--78449db666--g6hg6-eth0"
Feb 9 19:01:41.367627 env[1709]: 2024-02-09 19:01:41.187 [INFO][4329] ipam_plugin.go 268: Auto assigning IP ContainerID="add3667226ff6d52a34c0fa1f0ce7852032b86882dbaec0a3f7551aef714dcf7" HandleID="k8s-pod-network.add3667226ff6d52a34c0fa1f0ce7852032b86882dbaec0a3f7551aef714dcf7" Workload="ip--172--31--19--7-k8s-calico--kube--controllers--78449db666--g6hg6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002bea90), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-7", "pod":"calico-kube-controllers-78449db666-g6hg6", "timestamp":"2024-02-09 19:01:41.168831157 +0000 UTC"}, Hostname:"ip-172-31-19-7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Feb 9 19:01:41.367627 env[1709]: 2024-02-09 19:01:41.187 [INFO][4329] ipam_plugin.go 356: About to acquire host-wide IPAM lock.
Feb 9 19:01:41.367627 env[1709]: 2024-02-09 19:01:41.188 [INFO][4329] ipam_plugin.go 371: Acquired host-wide IPAM lock.
Feb 9 19:01:41.367627 env[1709]: 2024-02-09 19:01:41.188 [INFO][4329] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-7'
Feb 9 19:01:41.367627 env[1709]: 2024-02-09 19:01:41.199 [INFO][4329] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.add3667226ff6d52a34c0fa1f0ce7852032b86882dbaec0a3f7551aef714dcf7" host="ip-172-31-19-7"
Feb 9 19:01:41.367627 env[1709]: 2024-02-09 19:01:41.222 [INFO][4329] ipam.go 372: Looking up existing affinities for host host="ip-172-31-19-7"
Feb 9 19:01:41.367627 env[1709]: 2024-02-09 19:01:41.229 [INFO][4329] ipam.go 489: Trying affinity for 192.168.39.64/26 host="ip-172-31-19-7"
Feb 9 19:01:41.367627 env[1709]: 2024-02-09 19:01:41.233 [INFO][4329] ipam.go 155: Attempting to load block cidr=192.168.39.64/26 host="ip-172-31-19-7"
Feb 9 19:01:41.367627 env[1709]: 2024-02-09 19:01:41.239 [INFO][4329] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.39.64/26 host="ip-172-31-19-7"
Feb 9 19:01:41.367627 env[1709]: 2024-02-09 19:01:41.239 [INFO][4329] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.39.64/26 handle="k8s-pod-network.add3667226ff6d52a34c0fa1f0ce7852032b86882dbaec0a3f7551aef714dcf7" host="ip-172-31-19-7"
Feb 9 19:01:41.367627 env[1709]: 2024-02-09 19:01:41.242 [INFO][4329] ipam.go 1682: Creating new handle: k8s-pod-network.add3667226ff6d52a34c0fa1f0ce7852032b86882dbaec0a3f7551aef714dcf7
Feb 9 19:01:41.367627 env[1709]: 2024-02-09 19:01:41.264 [INFO][4329] ipam.go 1203: Writing block in order to claim IPs block=192.168.39.64/26 handle="k8s-pod-network.add3667226ff6d52a34c0fa1f0ce7852032b86882dbaec0a3f7551aef714dcf7" host="ip-172-31-19-7"
Feb 9 19:01:41.367627 env[1709]: 2024-02-09 19:01:41.280 [INFO][4329] ipam.go 1216: Successfully claimed IPs: [192.168.39.65/26] block=192.168.39.64/26 handle="k8s-pod-network.add3667226ff6d52a34c0fa1f0ce7852032b86882dbaec0a3f7551aef714dcf7" host="ip-172-31-19-7"
Feb 9 19:01:41.367627 env[1709]: 2024-02-09 19:01:41.280 [INFO][4329] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.39.65/26] handle="k8s-pod-network.add3667226ff6d52a34c0fa1f0ce7852032b86882dbaec0a3f7551aef714dcf7" host="ip-172-31-19-7"
Feb 9 19:01:41.367627 env[1709]: 2024-02-09 19:01:41.286 [INFO][4329] ipam_plugin.go 377: Released host-wide IPAM lock.
Feb 9 19:01:41.367627 env[1709]: 2024-02-09 19:01:41.286 [INFO][4329] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.39.65/26] IPv6=[] ContainerID="add3667226ff6d52a34c0fa1f0ce7852032b86882dbaec0a3f7551aef714dcf7" HandleID="k8s-pod-network.add3667226ff6d52a34c0fa1f0ce7852032b86882dbaec0a3f7551aef714dcf7" Workload="ip--172--31--19--7-k8s-calico--kube--controllers--78449db666--g6hg6-eth0"
Feb 9 19:01:41.368800 env[1709]: 2024-02-09 19:01:41.292 [INFO][4317] k8s.go 385: Populated endpoint ContainerID="add3667226ff6d52a34c0fa1f0ce7852032b86882dbaec0a3f7551aef714dcf7" Namespace="calico-system" Pod="calico-kube-controllers-78449db666-g6hg6" WorkloadEndpoint="ip--172--31--19--7-k8s-calico--kube--controllers--78449db666--g6hg6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--7-k8s-calico--kube--controllers--78449db666--g6hg6-eth0", GenerateName:"calico-kube-controllers-78449db666-", Namespace:"calico-system", SelfLink:"", UID:"607248e1-9d7a-4a9e-9970-28f4dcfc35fc", ResourceVersion:"695", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 1, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78449db666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-7", ContainerID:"", Pod:"calico-kube-controllers-78449db666-g6hg6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.39.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali709f8c0876a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 9 19:01:41.368800 env[1709]: 2024-02-09 19:01:41.293 [INFO][4317] k8s.go 386: Calico CNI using IPs: [192.168.39.65/32] ContainerID="add3667226ff6d52a34c0fa1f0ce7852032b86882dbaec0a3f7551aef714dcf7" Namespace="calico-system" Pod="calico-kube-controllers-78449db666-g6hg6" WorkloadEndpoint="ip--172--31--19--7-k8s-calico--kube--controllers--78449db666--g6hg6-eth0"
Feb 9 19:01:41.368800 env[1709]: 2024-02-09 19:01:41.293 [INFO][4317] dataplane_linux.go 68: Setting the host side veth name to cali709f8c0876a ContainerID="add3667226ff6d52a34c0fa1f0ce7852032b86882dbaec0a3f7551aef714dcf7" Namespace="calico-system" Pod="calico-kube-controllers-78449db666-g6hg6" WorkloadEndpoint="ip--172--31--19--7-k8s-calico--kube--controllers--78449db666--g6hg6-eth0"
Feb 9 19:01:41.368800 env[1709]: 2024-02-09 19:01:41.350 [INFO][4317] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="add3667226ff6d52a34c0fa1f0ce7852032b86882dbaec0a3f7551aef714dcf7" Namespace="calico-system" Pod="calico-kube-controllers-78449db666-g6hg6" WorkloadEndpoint="ip--172--31--19--7-k8s-calico--kube--controllers--78449db666--g6hg6-eth0"
Feb 9 19:01:41.368800 env[1709]: 2024-02-09 19:01:41.350 [INFO][4317] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="add3667226ff6d52a34c0fa1f0ce7852032b86882dbaec0a3f7551aef714dcf7" Namespace="calico-system" Pod="calico-kube-controllers-78449db666-g6hg6" WorkloadEndpoint="ip--172--31--19--7-k8s-calico--kube--controllers--78449db666--g6hg6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--7-k8s-calico--kube--controllers--78449db666--g6hg6-eth0", GenerateName:"calico-kube-controllers-78449db666-", Namespace:"calico-system", SelfLink:"", UID:"607248e1-9d7a-4a9e-9970-28f4dcfc35fc", ResourceVersion:"695", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 1, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78449db666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-7", ContainerID:"add3667226ff6d52a34c0fa1f0ce7852032b86882dbaec0a3f7551aef714dcf7", Pod:"calico-kube-controllers-78449db666-g6hg6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.39.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali709f8c0876a", MAC:"c2:16:ee:40:47:24", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 9 19:01:41.368800 env[1709]: 2024-02-09 19:01:41.362 [INFO][4317] k8s.go 491: Wrote updated endpoint to datastore ContainerID="add3667226ff6d52a34c0fa1f0ce7852032b86882dbaec0a3f7551aef714dcf7" Namespace="calico-system" Pod="calico-kube-controllers-78449db666-g6hg6" WorkloadEndpoint="ip--172--31--19--7-k8s-calico--kube--controllers--78449db666--g6hg6-eth0"
Feb 9 19:01:41.419411 env[1709]: time="2024-02-09T19:01:41.419355803Z" level=info msg="StopPodSandbox for \"a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a\""
Feb 9 19:01:41.445267 systemd-networkd[1512]: cali5e07b897b0b: Link UP
Feb 9 19:01:41.448234 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali5e07b897b0b: link becomes ready
Feb 9 19:01:41.447444 systemd-networkd[1512]: cali5e07b897b0b: Gained carrier
Feb 9 19:01:41.478307 env[1709]: time="2024-02-09T19:01:41.477812878Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:01:41.478307 env[1709]: time="2024-02-09T19:01:41.477921600Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:01:41.478307 env[1709]: time="2024-02-09T19:01:41.477955637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:01:41.478307 env[1709]: time="2024-02-09T19:01:41.478188797Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/add3667226ff6d52a34c0fa1f0ce7852032b86882dbaec0a3f7551aef714dcf7 pid=4389 runtime=io.containerd.runc.v2
Feb 9 19:01:41.486270 env[1709]: 2024-02-09 19:01:41.034 [INFO][4306] utils.go 100: File /var/lib/calico/mtu does not exist
Feb 9 19:01:41.486270 env[1709]: 2024-02-09 19:01:41.068 [INFO][4306] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--7-k8s-coredns--787d4945fb--ztsmf-eth0 coredns-787d4945fb- kube-system 2fa17834-5f19-4801-a2cd-cf352498e924 696 0 2024-02-09 19:01:05 +0000 UTC map[k8s-app:kube-dns pod-template-hash:787d4945fb projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-19-7 coredns-787d4945fb-ztsmf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5e07b897b0b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="9fa2a74fc65141821cb7eb58642ffb36184a27a379b4b87856ec62a005956d8d" Namespace="kube-system" Pod="coredns-787d4945fb-ztsmf" WorkloadEndpoint="ip--172--31--19--7-k8s-coredns--787d4945fb--ztsmf-"
Feb 9 19:01:41.486270 env[1709]: 2024-02-09 19:01:41.068 [INFO][4306] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="9fa2a74fc65141821cb7eb58642ffb36184a27a379b4b87856ec62a005956d8d" Namespace="kube-system" Pod="coredns-787d4945fb-ztsmf" WorkloadEndpoint="ip--172--31--19--7-k8s-coredns--787d4945fb--ztsmf-eth0"
Feb 9 19:01:41.486270 env[1709]: 2024-02-09 19:01:41.190 [INFO][4330] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9fa2a74fc65141821cb7eb58642ffb36184a27a379b4b87856ec62a005956d8d" HandleID="k8s-pod-network.9fa2a74fc65141821cb7eb58642ffb36184a27a379b4b87856ec62a005956d8d" Workload="ip--172--31--19--7-k8s-coredns--787d4945fb--ztsmf-eth0"
Feb 9 19:01:41.486270 env[1709]: 2024-02-09 19:01:41.211 [INFO][4330] ipam_plugin.go 268: Auto assigning IP ContainerID="9fa2a74fc65141821cb7eb58642ffb36184a27a379b4b87856ec62a005956d8d" HandleID="k8s-pod-network.9fa2a74fc65141821cb7eb58642ffb36184a27a379b4b87856ec62a005956d8d" Workload="ip--172--31--19--7-k8s-coredns--787d4945fb--ztsmf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002bebd0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-19-7", "pod":"coredns-787d4945fb-ztsmf", "timestamp":"2024-02-09 19:01:41.190087304 +0000 UTC"}, Hostname:"ip-172-31-19-7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Feb 9 19:01:41.486270 env[1709]: 2024-02-09 19:01:41.211 [INFO][4330] ipam_plugin.go 356: About to acquire host-wide IPAM lock.
Feb 9 19:01:41.486270 env[1709]: 2024-02-09 19:01:41.287 [INFO][4330] ipam_plugin.go 371: Acquired host-wide IPAM lock.
Feb 9 19:01:41.486270 env[1709]: 2024-02-09 19:01:41.287 [INFO][4330] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-7'
Feb 9 19:01:41.486270 env[1709]: 2024-02-09 19:01:41.298 [INFO][4330] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9fa2a74fc65141821cb7eb58642ffb36184a27a379b4b87856ec62a005956d8d" host="ip-172-31-19-7"
Feb 9 19:01:41.486270 env[1709]: 2024-02-09 19:01:41.304 [INFO][4330] ipam.go 372: Looking up existing affinities for host host="ip-172-31-19-7"
Feb 9 19:01:41.486270 env[1709]: 2024-02-09 19:01:41.318 [INFO][4330] ipam.go 489: Trying affinity for 192.168.39.64/26 host="ip-172-31-19-7"
Feb 9 19:01:41.486270 env[1709]: 2024-02-09 19:01:41.341 [INFO][4330] ipam.go 155: Attempting to load block cidr=192.168.39.64/26 host="ip-172-31-19-7"
Feb 9 19:01:41.486270 env[1709]: 2024-02-09 19:01:41.352 [INFO][4330] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.39.64/26 host="ip-172-31-19-7"
Feb 9 19:01:41.486270 env[1709]: 2024-02-09 19:01:41.360 [INFO][4330] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.39.64/26 handle="k8s-pod-network.9fa2a74fc65141821cb7eb58642ffb36184a27a379b4b87856ec62a005956d8d" host="ip-172-31-19-7"
Feb 9 19:01:41.486270 env[1709]: 2024-02-09 19:01:41.370 [INFO][4330] ipam.go 1682: Creating new handle: k8s-pod-network.9fa2a74fc65141821cb7eb58642ffb36184a27a379b4b87856ec62a005956d8d
Feb 9 19:01:41.486270 env[1709]: 2024-02-09 19:01:41.407 [INFO][4330] ipam.go 1203: Writing block in order to claim IPs block=192.168.39.64/26 handle="k8s-pod-network.9fa2a74fc65141821cb7eb58642ffb36184a27a379b4b87856ec62a005956d8d" host="ip-172-31-19-7"
Feb 9 19:01:41.486270 env[1709]: 2024-02-09 19:01:41.423 [INFO][4330] ipam.go 1216: Successfully claimed IPs: [192.168.39.66/26] block=192.168.39.64/26 handle="k8s-pod-network.9fa2a74fc65141821cb7eb58642ffb36184a27a379b4b87856ec62a005956d8d" host="ip-172-31-19-7"
Feb 9 19:01:41.486270 env[1709]: 2024-02-09 19:01:41.423 [INFO][4330] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.39.66/26] handle="k8s-pod-network.9fa2a74fc65141821cb7eb58642ffb36184a27a379b4b87856ec62a005956d8d" host="ip-172-31-19-7"
Feb 9 19:01:41.486270 env[1709]: 2024-02-09 19:01:41.424 [INFO][4330] ipam_plugin.go 377: Released host-wide IPAM lock.
Feb 9 19:01:41.486270 env[1709]: 2024-02-09 19:01:41.424 [INFO][4330] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.39.66/26] IPv6=[] ContainerID="9fa2a74fc65141821cb7eb58642ffb36184a27a379b4b87856ec62a005956d8d" HandleID="k8s-pod-network.9fa2a74fc65141821cb7eb58642ffb36184a27a379b4b87856ec62a005956d8d" Workload="ip--172--31--19--7-k8s-coredns--787d4945fb--ztsmf-eth0"
Feb 9 19:01:41.487628 env[1709]: 2024-02-09 19:01:41.435 [INFO][4306] k8s.go 385: Populated endpoint ContainerID="9fa2a74fc65141821cb7eb58642ffb36184a27a379b4b87856ec62a005956d8d" Namespace="kube-system" Pod="coredns-787d4945fb-ztsmf" WorkloadEndpoint="ip--172--31--19--7-k8s-coredns--787d4945fb--ztsmf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--7-k8s-coredns--787d4945fb--ztsmf-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"2fa17834-5f19-4801-a2cd-cf352498e924", ResourceVersion:"696", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 1, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-7", ContainerID:"", Pod:"coredns-787d4945fb-ztsmf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.39.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5e07b897b0b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 9 19:01:41.487628 env[1709]: 2024-02-09 19:01:41.435 [INFO][4306] k8s.go 386: Calico CNI using IPs: [192.168.39.66/32] ContainerID="9fa2a74fc65141821cb7eb58642ffb36184a27a379b4b87856ec62a005956d8d" Namespace="kube-system" Pod="coredns-787d4945fb-ztsmf" WorkloadEndpoint="ip--172--31--19--7-k8s-coredns--787d4945fb--ztsmf-eth0"
Feb 9 19:01:41.487628 env[1709]: 2024-02-09 19:01:41.435 [INFO][4306] dataplane_linux.go 68: Setting the host side veth name to cali5e07b897b0b ContainerID="9fa2a74fc65141821cb7eb58642ffb36184a27a379b4b87856ec62a005956d8d" Namespace="kube-system" Pod="coredns-787d4945fb-ztsmf" WorkloadEndpoint="ip--172--31--19--7-k8s-coredns--787d4945fb--ztsmf-eth0"
Feb 9 19:01:41.487628 env[1709]: 2024-02-09 19:01:41.453 [INFO][4306] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="9fa2a74fc65141821cb7eb58642ffb36184a27a379b4b87856ec62a005956d8d" Namespace="kube-system" Pod="coredns-787d4945fb-ztsmf" WorkloadEndpoint="ip--172--31--19--7-k8s-coredns--787d4945fb--ztsmf-eth0"
Feb 9 19:01:41.487628 env[1709]: 2024-02-09 19:01:41.454 [INFO][4306] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="9fa2a74fc65141821cb7eb58642ffb36184a27a379b4b87856ec62a005956d8d" Namespace="kube-system" Pod="coredns-787d4945fb-ztsmf" WorkloadEndpoint="ip--172--31--19--7-k8s-coredns--787d4945fb--ztsmf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--7-k8s-coredns--787d4945fb--ztsmf-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"2fa17834-5f19-4801-a2cd-cf352498e924", ResourceVersion:"696", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 1, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-7", ContainerID:"9fa2a74fc65141821cb7eb58642ffb36184a27a379b4b87856ec62a005956d8d", Pod:"coredns-787d4945fb-ztsmf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.39.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5e07b897b0b", MAC:"7e:a7:e8:d7:06:40", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 9 19:01:41.487628 env[1709]: 2024-02-09 19:01:41.479 [INFO][4306] k8s.go 491: Wrote updated endpoint to datastore ContainerID="9fa2a74fc65141821cb7eb58642ffb36184a27a379b4b87856ec62a005956d8d" Namespace="kube-system" Pod="coredns-787d4945fb-ztsmf" WorkloadEndpoint="ip--172--31--19--7-k8s-coredns--787d4945fb--ztsmf-eth0"
Feb 9 19:01:41.621300 env[1709]: time="2024-02-09T19:01:41.620792828Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:01:41.621300 env[1709]: time="2024-02-09T19:01:41.620914940Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:01:41.621300 env[1709]: time="2024-02-09T19:01:41.620948435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:01:41.630806 env[1709]: time="2024-02-09T19:01:41.630709630Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9fa2a74fc65141821cb7eb58642ffb36184a27a379b4b87856ec62a005956d8d pid=4441 runtime=io.containerd.runc.v2
Feb 9 19:01:41.844468 env[1709]: time="2024-02-09T19:01:41.844407557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-ztsmf,Uid:2fa17834-5f19-4801-a2cd-cf352498e924,Namespace:kube-system,Attempt:1,} returns sandbox id \"9fa2a74fc65141821cb7eb58642ffb36184a27a379b4b87856ec62a005956d8d\""
Feb 9 19:01:41.847380 env[1709]: 2024-02-09 19:01:41.641 [INFO][4401] k8s.go 578: Cleaning up netns ContainerID="a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a"
Feb 9 19:01:41.847380 env[1709]: 2024-02-09 19:01:41.641 [INFO][4401] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a" iface="eth0" netns="/var/run/netns/cni-212ca483-c7a2-0043-aaab-28bc0758143f"
Feb 9 19:01:41.847380 env[1709]: 2024-02-09 19:01:41.641 [INFO][4401] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a" iface="eth0" netns="/var/run/netns/cni-212ca483-c7a2-0043-aaab-28bc0758143f"
Feb 9 19:01:41.847380 env[1709]: 2024-02-09 19:01:41.642 [INFO][4401] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a" iface="eth0" netns="/var/run/netns/cni-212ca483-c7a2-0043-aaab-28bc0758143f"
Feb 9 19:01:41.847380 env[1709]: 2024-02-09 19:01:41.642 [INFO][4401] k8s.go 585: Releasing IP address(es) ContainerID="a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a"
Feb 9 19:01:41.847380 env[1709]: 2024-02-09 19:01:41.642 [INFO][4401] utils.go 188: Calico CNI releasing IP address ContainerID="a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a"
Feb 9 19:01:41.847380 env[1709]: 2024-02-09 19:01:41.816 [INFO][4460] ipam_plugin.go 415: Releasing address using handleID ContainerID="a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a" HandleID="k8s-pod-network.a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a" Workload="ip--172--31--19--7-k8s-csi--node--driver--sfvlp-eth0"
Feb 9 19:01:41.847380 env[1709]: 2024-02-09 19:01:41.816 [INFO][4460] ipam_plugin.go 356: About to acquire host-wide IPAM lock.
Feb 9 19:01:41.847380 env[1709]: 2024-02-09 19:01:41.816 [INFO][4460] ipam_plugin.go 371: Acquired host-wide IPAM lock.
Feb 9 19:01:41.847380 env[1709]: 2024-02-09 19:01:41.834 [WARNING][4460] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a" HandleID="k8s-pod-network.a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a" Workload="ip--172--31--19--7-k8s-csi--node--driver--sfvlp-eth0"
Feb 9 19:01:41.847380 env[1709]: 2024-02-09 19:01:41.834 [INFO][4460] ipam_plugin.go 443: Releasing address using workloadID ContainerID="a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a" HandleID="k8s-pod-network.a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a" Workload="ip--172--31--19--7-k8s-csi--node--driver--sfvlp-eth0"
Feb 9 19:01:41.847380 env[1709]: 2024-02-09 19:01:41.840 [INFO][4460] ipam_plugin.go 377: Released host-wide IPAM lock.
Feb 9 19:01:41.847380 env[1709]: 2024-02-09 19:01:41.843 [INFO][4401] k8s.go 591: Teardown processing complete. ContainerID="a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a"
Feb 9 19:01:41.848945 env[1709]: time="2024-02-09T19:01:41.848911855Z" level=info msg="TearDown network for sandbox \"a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a\" successfully"
Feb 9 19:01:41.849446 env[1709]: time="2024-02-09T19:01:41.849070011Z" level=info msg="StopPodSandbox for \"a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a\" returns successfully"
Feb 9 19:01:41.864280 env[1709]: time="2024-02-09T19:01:41.864225783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sfvlp,Uid:196d1d2d-b701-4a86-946e-fecee4636cf4,Namespace:calico-system,Attempt:1,}"
Feb 9 19:01:41.865540 env[1709]: time="2024-02-09T19:01:41.865459160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78449db666-g6hg6,Uid:607248e1-9d7a-4a9e-9970-28f4dcfc35fc,Namespace:calico-system,Attempt:1,} returns sandbox id \"add3667226ff6d52a34c0fa1f0ce7852032b86882dbaec0a3f7551aef714dcf7\""
Feb 9 19:01:41.877615 env[1709]: time="2024-02-09T19:01:41.877466547Z" level=info msg="CreateContainer within sandbox \"9fa2a74fc65141821cb7eb58642ffb36184a27a379b4b87856ec62a005956d8d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 9 19:01:41.878899 env[1709]: time="2024-02-09T19:01:41.878163367Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.27.0\""
Feb 9 19:01:41.922351 systemd[1]: run-netns-cni\x2d212ca483\x2dc7a2\x2d0043\x2daaab\x2d28bc0758143f.mount: Deactivated successfully.
Feb 9 19:01:41.958262 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1147064018.mount: Deactivated successfully.
Feb 9 19:01:42.007456 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2377734080.mount: Deactivated successfully.
Feb 9 19:01:42.008108 env[1709]: time="2024-02-09T19:01:42.008061384Z" level=info msg="CreateContainer within sandbox \"9fa2a74fc65141821cb7eb58642ffb36184a27a379b4b87856ec62a005956d8d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f9f03eb2f8ea46513ef1efe447bb18ee7b44eee5d0ea2fea113717efd91cd01d\""
Feb 9 19:01:42.011593 env[1709]: time="2024-02-09T19:01:42.010866456Z" level=info msg="StartContainer for \"f9f03eb2f8ea46513ef1efe447bb18ee7b44eee5d0ea2fea113717efd91cd01d\""
Feb 9 19:01:42.227474 env[1709]: time="2024-02-09T19:01:42.227364022Z" level=info msg="StartContainer for \"f9f03eb2f8ea46513ef1efe447bb18ee7b44eee5d0ea2fea113717efd91cd01d\" returns successfully"
Feb 9 19:01:42.231391 (udev-worker)[4357]: Network interface NamePolicy= disabled on kernel command line.
Feb 9 19:01:42.267000 audit[4580]: AVC avc: denied { bpf } for pid=4580 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 19:01:42.267000 audit[4580]: AVC avc: denied { bpf } for pid=4580 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 19:01:42.267000 audit[4580]: AVC avc: denied { perfmon } for pid=4580 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 19:01:42.267000 audit[4580]: AVC avc: denied { perfmon } for pid=4580 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 19:01:42.267000 audit[4580]: AVC avc: denied { perfmon } for pid=4580 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 19:01:42.267000 audit[4580]: AVC avc: denied { perfmon } for pid=4580 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 19:01:42.267000 audit[4580]: AVC avc: denied { perfmon } for pid=4580 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 19:01:42.267000 audit[4580]: AVC avc: denied { bpf } for pid=4580 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 19:01:42.267000 audit[4580]: AVC avc: denied { bpf } for pid=4580 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 19:01:42.267000 audit: BPF prog-id=10 op=LOAD
Feb 9 19:01:42.267000
audit[4580]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffecb093400 a2=70 a3=7ff973414000 items=0 ppid=4347 pid=4580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:42.267000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:01:42.267000 audit: BPF prog-id=10 op=UNLOAD Feb 9 19:01:42.267000 audit[4580]: AVC avc: denied { bpf } for pid=4580 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:01:42.267000 audit[4580]: AVC avc: denied { bpf } for pid=4580 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:01:42.267000 audit[4580]: AVC avc: denied { perfmon } for pid=4580 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:01:42.267000 audit[4580]: AVC avc: denied { perfmon } for pid=4580 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:01:42.267000 audit[4580]: AVC avc: denied { perfmon } for pid=4580 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:01:42.267000 audit[4580]: AVC avc: denied { perfmon } for pid=4580 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:01:42.267000 audit[4580]: AVC avc: denied { perfmon } for pid=4580 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:01:42.267000 audit[4580]: AVC avc: denied { bpf } for pid=4580 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:01:42.267000 audit[4580]: AVC avc: denied { bpf } for pid=4580 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:01:42.267000 audit: BPF prog-id=11 op=LOAD Feb 9 19:01:42.267000 audit[4580]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffecb093400 a2=70 a3=6e items=0 ppid=4347 pid=4580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:42.267000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:01:42.267000 audit: BPF prog-id=11 op=UNLOAD Feb 9 19:01:42.267000 audit[4580]: AVC avc: denied { perfmon } for pid=4580 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:01:42.267000 audit[4580]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffecb0933b0 a2=70 a3=7ffecb093400 items=0 ppid=4347 pid=4580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:42.267000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:01:42.267000 audit[4580]: AVC avc: denied { bpf } for pid=4580 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:01:42.267000 audit[4580]: AVC avc: denied { bpf } for pid=4580 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:01:42.267000 audit[4580]: AVC avc: denied { perfmon } for pid=4580 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:01:42.267000 audit[4580]: AVC avc: denied { perfmon } for pid=4580 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:01:42.267000 audit[4580]: AVC avc: denied { perfmon } for pid=4580 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:01:42.267000 audit[4580]: AVC avc: denied { perfmon } for pid=4580 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:01:42.267000 audit[4580]: AVC avc: denied { perfmon } for pid=4580 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:01:42.267000 audit[4580]: AVC avc: denied { bpf } for pid=4580 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:01:42.267000 audit[4580]: AVC avc: denied { bpf } for pid=4580 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:01:42.267000 audit: BPF prog-id=12 op=LOAD Feb 9 19:01:42.267000 audit[4580]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffecb093390 a2=70 a3=7ffecb093400 items=0 ppid=4347 pid=4580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:42.267000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:01:42.267000 audit: BPF prog-id=12 op=UNLOAD Feb 9 19:01:42.267000 audit[4580]: AVC avc: denied { bpf } for pid=4580 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:01:42.267000 audit[4580]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffecb093470 a2=70 a3=0 items=0 ppid=4347 pid=4580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:42.267000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:01:42.267000 audit[4580]: AVC avc: denied { bpf } for pid=4580 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:01:42.267000 audit[4580]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffecb093460 a2=70 a3=0 items=0 ppid=4347 
pid=4580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:42.267000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:01:42.267000 audit[4580]: AVC avc: denied { bpf } for pid=4580 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:01:42.267000 audit[4580]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7ffecb0934a0 a2=70 a3=0 items=0 ppid=4347 pid=4580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:42.267000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:01:42.268000 audit[4580]: AVC avc: denied { bpf } for pid=4580 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:01:42.268000 audit[4580]: AVC avc: denied { bpf } for pid=4580 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:01:42.268000 audit[4580]: AVC avc: denied { bpf } for pid=4580 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:01:42.268000 audit[4580]: AVC avc: denied { perfmon } for pid=4580 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:01:42.268000 audit[4580]: AVC avc: denied { perfmon } for pid=4580 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:01:42.268000 audit[4580]: AVC avc: denied { perfmon } for pid=4580 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:01:42.268000 audit[4580]: AVC avc: denied { perfmon } for pid=4580 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:01:42.268000 audit[4580]: AVC avc: denied { perfmon } for pid=4580 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:01:42.268000 audit[4580]: AVC avc: denied { bpf } for pid=4580 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:01:42.268000 audit[4580]: AVC avc: denied { bpf } for pid=4580 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:01:42.268000 audit: BPF prog-id=13 op=LOAD Feb 9 19:01:42.268000 audit[4580]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffecb0933c0 a2=70 a3=ffffffff items=0 ppid=4347 pid=4580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:42.268000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:01:42.288000 audit[4584]: AVC avc: denied { bpf } for pid=4584 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:01:42.288000 audit[4584]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffcab5b38a0 a2=70 a3=fff80800 items=0 ppid=4347 pid=4584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:42.288000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Feb 9 19:01:42.288000 audit[4584]: AVC avc: denied { bpf } for pid=4584 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:01:42.288000 audit[4584]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffcab5b3770 a2=70 a3=3 items=0 ppid=4347 pid=4584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:42.288000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Feb 9 19:01:42.299000 audit: BPF prog-id=13 op=UNLOAD Feb 9 19:01:42.330391 (udev-worker)[4581]: Network interface NamePolicy= disabled on kernel command line. 
Feb 9 19:01:42.337428 systemd-networkd[1512]: calibaf62cc3e91: Link UP Feb 9 19:01:42.351896 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calibaf62cc3e91: link becomes ready Feb 9 19:01:42.350732 systemd-networkd[1512]: calibaf62cc3e91: Gained carrier Feb 9 19:01:42.378605 env[1709]: 2024-02-09 19:01:42.081 [INFO][4514] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--7-k8s-csi--node--driver--sfvlp-eth0 csi-node-driver- calico-system 196d1d2d-b701-4a86-946e-fecee4636cf4 705 0 2024-02-09 19:01:11 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7c77f88967 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ip-172-31-19-7 csi-node-driver-sfvlp eth0 default [] [] [kns.calico-system ksa.calico-system.default] calibaf62cc3e91 [] []}} ContainerID="c48c9130704f7c9c50f210bcff9cdaf7fbcff1eab5fa2809764a81a766952c63" Namespace="calico-system" Pod="csi-node-driver-sfvlp" WorkloadEndpoint="ip--172--31--19--7-k8s-csi--node--driver--sfvlp-" Feb 9 19:01:42.378605 env[1709]: 2024-02-09 19:01:42.082 [INFO][4514] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="c48c9130704f7c9c50f210bcff9cdaf7fbcff1eab5fa2809764a81a766952c63" Namespace="calico-system" Pod="csi-node-driver-sfvlp" WorkloadEndpoint="ip--172--31--19--7-k8s-csi--node--driver--sfvlp-eth0" Feb 9 19:01:42.378605 env[1709]: 2024-02-09 19:01:42.221 [INFO][4548] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c48c9130704f7c9c50f210bcff9cdaf7fbcff1eab5fa2809764a81a766952c63" HandleID="k8s-pod-network.c48c9130704f7c9c50f210bcff9cdaf7fbcff1eab5fa2809764a81a766952c63" Workload="ip--172--31--19--7-k8s-csi--node--driver--sfvlp-eth0" Feb 9 19:01:42.378605 env[1709]: 2024-02-09 19:01:42.254 [INFO][4548] ipam_plugin.go 268: Auto assigning IP 
ContainerID="c48c9130704f7c9c50f210bcff9cdaf7fbcff1eab5fa2809764a81a766952c63" HandleID="k8s-pod-network.c48c9130704f7c9c50f210bcff9cdaf7fbcff1eab5fa2809764a81a766952c63" Workload="ip--172--31--19--7-k8s-csi--node--driver--sfvlp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027cc50), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-7", "pod":"csi-node-driver-sfvlp", "timestamp":"2024-02-09 19:01:42.221388566 +0000 UTC"}, Hostname:"ip-172-31-19-7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 19:01:42.378605 env[1709]: 2024-02-09 19:01:42.254 [INFO][4548] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:01:42.378605 env[1709]: 2024-02-09 19:01:42.254 [INFO][4548] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:01:42.378605 env[1709]: 2024-02-09 19:01:42.254 [INFO][4548] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-7' Feb 9 19:01:42.378605 env[1709]: 2024-02-09 19:01:42.262 [INFO][4548] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c48c9130704f7c9c50f210bcff9cdaf7fbcff1eab5fa2809764a81a766952c63" host="ip-172-31-19-7" Feb 9 19:01:42.378605 env[1709]: 2024-02-09 19:01:42.278 [INFO][4548] ipam.go 372: Looking up existing affinities for host host="ip-172-31-19-7" Feb 9 19:01:42.378605 env[1709]: 2024-02-09 19:01:42.293 [INFO][4548] ipam.go 489: Trying affinity for 192.168.39.64/26 host="ip-172-31-19-7" Feb 9 19:01:42.378605 env[1709]: 2024-02-09 19:01:42.296 [INFO][4548] ipam.go 155: Attempting to load block cidr=192.168.39.64/26 host="ip-172-31-19-7" Feb 9 19:01:42.378605 env[1709]: 2024-02-09 19:01:42.300 [INFO][4548] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.39.64/26 host="ip-172-31-19-7" Feb 9 19:01:42.378605 env[1709]: 2024-02-09 
19:01:42.300 [INFO][4548] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.39.64/26 handle="k8s-pod-network.c48c9130704f7c9c50f210bcff9cdaf7fbcff1eab5fa2809764a81a766952c63" host="ip-172-31-19-7" Feb 9 19:01:42.378605 env[1709]: 2024-02-09 19:01:42.303 [INFO][4548] ipam.go 1682: Creating new handle: k8s-pod-network.c48c9130704f7c9c50f210bcff9cdaf7fbcff1eab5fa2809764a81a766952c63 Feb 9 19:01:42.378605 env[1709]: 2024-02-09 19:01:42.311 [INFO][4548] ipam.go 1203: Writing block in order to claim IPs block=192.168.39.64/26 handle="k8s-pod-network.c48c9130704f7c9c50f210bcff9cdaf7fbcff1eab5fa2809764a81a766952c63" host="ip-172-31-19-7" Feb 9 19:01:42.378605 env[1709]: 2024-02-09 19:01:42.322 [INFO][4548] ipam.go 1216: Successfully claimed IPs: [192.168.39.67/26] block=192.168.39.64/26 handle="k8s-pod-network.c48c9130704f7c9c50f210bcff9cdaf7fbcff1eab5fa2809764a81a766952c63" host="ip-172-31-19-7" Feb 9 19:01:42.378605 env[1709]: 2024-02-09 19:01:42.322 [INFO][4548] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.39.67/26] handle="k8s-pod-network.c48c9130704f7c9c50f210bcff9cdaf7fbcff1eab5fa2809764a81a766952c63" host="ip-172-31-19-7" Feb 9 19:01:42.378605 env[1709]: 2024-02-09 19:01:42.322 [INFO][4548] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 19:01:42.378605 env[1709]: 2024-02-09 19:01:42.322 [INFO][4548] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.39.67/26] IPv6=[] ContainerID="c48c9130704f7c9c50f210bcff9cdaf7fbcff1eab5fa2809764a81a766952c63" HandleID="k8s-pod-network.c48c9130704f7c9c50f210bcff9cdaf7fbcff1eab5fa2809764a81a766952c63" Workload="ip--172--31--19--7-k8s-csi--node--driver--sfvlp-eth0" Feb 9 19:01:42.379719 env[1709]: 2024-02-09 19:01:42.326 [INFO][4514] k8s.go 385: Populated endpoint ContainerID="c48c9130704f7c9c50f210bcff9cdaf7fbcff1eab5fa2809764a81a766952c63" Namespace="calico-system" Pod="csi-node-driver-sfvlp" WorkloadEndpoint="ip--172--31--19--7-k8s-csi--node--driver--sfvlp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--7-k8s-csi--node--driver--sfvlp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"196d1d2d-b701-4a86-946e-fecee4636cf4", ResourceVersion:"705", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 1, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-7", ContainerID:"", Pod:"csi-node-driver-sfvlp", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.39.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.default"}, InterfaceName:"calibaf62cc3e91", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:01:42.379719 env[1709]: 2024-02-09 19:01:42.327 [INFO][4514] k8s.go 386: Calico CNI using IPs: [192.168.39.67/32] ContainerID="c48c9130704f7c9c50f210bcff9cdaf7fbcff1eab5fa2809764a81a766952c63" Namespace="calico-system" Pod="csi-node-driver-sfvlp" WorkloadEndpoint="ip--172--31--19--7-k8s-csi--node--driver--sfvlp-eth0" Feb 9 19:01:42.379719 env[1709]: 2024-02-09 19:01:42.327 [INFO][4514] dataplane_linux.go 68: Setting the host side veth name to calibaf62cc3e91 ContainerID="c48c9130704f7c9c50f210bcff9cdaf7fbcff1eab5fa2809764a81a766952c63" Namespace="calico-system" Pod="csi-node-driver-sfvlp" WorkloadEndpoint="ip--172--31--19--7-k8s-csi--node--driver--sfvlp-eth0" Feb 9 19:01:42.379719 env[1709]: 2024-02-09 19:01:42.346 [INFO][4514] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="c48c9130704f7c9c50f210bcff9cdaf7fbcff1eab5fa2809764a81a766952c63" Namespace="calico-system" Pod="csi-node-driver-sfvlp" WorkloadEndpoint="ip--172--31--19--7-k8s-csi--node--driver--sfvlp-eth0" Feb 9 19:01:42.379719 env[1709]: 2024-02-09 19:01:42.347 [INFO][4514] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="c48c9130704f7c9c50f210bcff9cdaf7fbcff1eab5fa2809764a81a766952c63" Namespace="calico-system" Pod="csi-node-driver-sfvlp" WorkloadEndpoint="ip--172--31--19--7-k8s-csi--node--driver--sfvlp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--7-k8s-csi--node--driver--sfvlp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"196d1d2d-b701-4a86-946e-fecee4636cf4", ResourceVersion:"705", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 1, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-7", ContainerID:"c48c9130704f7c9c50f210bcff9cdaf7fbcff1eab5fa2809764a81a766952c63", Pod:"csi-node-driver-sfvlp", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.39.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calibaf62cc3e91", MAC:"96:ad:dc:d1:61:4b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:01:42.379719 env[1709]: 2024-02-09 19:01:42.373 [INFO][4514] k8s.go 491: Wrote updated endpoint to datastore ContainerID="c48c9130704f7c9c50f210bcff9cdaf7fbcff1eab5fa2809764a81a766952c63" Namespace="calico-system" Pod="csi-node-driver-sfvlp" WorkloadEndpoint="ip--172--31--19--7-k8s-csi--node--driver--sfvlp-eth0" Feb 9 19:01:42.410683 env[1709]: time="2024-02-09T19:01:42.405764923Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:01:42.410683 env[1709]: time="2024-02-09T19:01:42.405823577Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:01:42.410683 env[1709]: time="2024-02-09T19:01:42.405840118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:01:42.410683 env[1709]: time="2024-02-09T19:01:42.406040191Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c48c9130704f7c9c50f210bcff9cdaf7fbcff1eab5fa2809764a81a766952c63 pid=4614 runtime=io.containerd.runc.v2
Feb 9 19:01:42.528022 env[1709]: time="2024-02-09T19:01:42.527913763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sfvlp,Uid:196d1d2d-b701-4a86-946e-fecee4636cf4,Namespace:calico-system,Attempt:1,} returns sandbox id \"c48c9130704f7c9c50f210bcff9cdaf7fbcff1eab5fa2809764a81a766952c63\""
Feb 9 19:01:42.619000 audit[4666]: NETFILTER_CFG table=mangle:113 family=2 entries=19 op=nft_register_chain pid=4666 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re"
Feb 9 19:01:42.623528 kernel: kauditd_printk_skb: 102 callbacks suppressed
Feb 9 19:01:42.623622 kernel: audit: type=1325 audit(1707505302.619:308): table=mangle:113 family=2 entries=19 op=nft_register_chain pid=4666 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re"
Feb 9 19:01:42.619000 audit[4666]: SYSCALL arch=c000003e syscall=46 success=yes exit=6800 a0=3 a1=7ffed1fc3d90 a2=0 a3=7ffed1fc3d7c items=0 ppid=4347 pid=4666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:01:42.634718 kernel: audit: type=1300 audit(1707505302.619:308): arch=c000003e syscall=46 success=yes exit=6800 a0=3 a1=7ffed1fc3d90 a2=0 a3=7ffed1fc3d7c items=0 ppid=4347 pid=4666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:01:42.639117 kernel: audit: type=1327 audit(1707505302.619:308): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030
Feb 9 19:01:42.619000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030
Feb 9 19:01:42.639000 audit[4665]: NETFILTER_CFG table=raw:114 family=2 entries=19 op=nft_register_chain pid=4665 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re"
Feb 9 19:01:42.648639 kernel: audit: type=1325 audit(1707505302.639:309): table=raw:114 family=2 entries=19 op=nft_register_chain pid=4665 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re"
Feb 9 19:01:42.639000 audit[4665]: SYSCALL arch=c000003e syscall=46 success=yes exit=6132 a0=3 a1=7ffc9f523ff0 a2=0 a3=7ffc9f523fdc items=0 ppid=4347 pid=4665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:01:42.657789 kernel: audit: type=1300 audit(1707505302.639:309): arch=c000003e syscall=46 success=yes exit=6132 a0=3 a1=7ffc9f523ff0 a2=0 a3=7ffc9f523fdc items=0 ppid=4347 pid=4665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:01:42.661898 kernel: audit: type=1327 audit(1707505302.639:309): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030
Feb 9 19:01:42.639000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030
Feb 9 19:01:42.663000 audit[4669]: NETFILTER_CFG table=nat:115 family=2 entries=16 op=nft_register_chain pid=4669 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re"
Feb 9 19:01:42.679653 kernel: audit: type=1325 audit(1707505302.663:310): table=nat:115 family=2 entries=16 op=nft_register_chain pid=4669 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re"
Feb 9 19:01:42.663000 audit[4669]: SYSCALL arch=c000003e syscall=46 success=yes exit=5188 a0=3 a1=7ffde1e65360 a2=0 a3=7ffde1e6534c items=0 ppid=4347 pid=4669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:01:42.697530 kernel: audit: type=1300 audit(1707505302.663:310): arch=c000003e syscall=46 success=yes exit=5188 a0=3 a1=7ffde1e65360 a2=0 a3=7ffde1e6534c items=0 ppid=4347 pid=4669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:01:42.709597 systemd-networkd[1512]: vxlan.calico: Link UP
Feb 9 19:01:42.709610 systemd-networkd[1512]: vxlan.calico: Gained carrier
Feb 9 19:01:42.663000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030
Feb 9 19:01:42.688000 audit[4667]: NETFILTER_CFG table=filter:116 family=2 entries=103 op=nft_register_chain pid=4667 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re"
Feb 9 19:01:42.688000 audit[4667]: SYSCALL arch=c000003e syscall=46 success=yes exit=54800 a0=3 a1=7ffc38b23a50 a2=0 a3=561efed88000 items=0 ppid=4347 pid=4667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:01:42.688000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030
Feb 9 19:01:42.721762 kernel: audit: type=1327 audit(1707505302.663:310): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030
Feb 9 19:01:42.721839 kernel: audit: type=1325 audit(1707505302.688:311): table=filter:116 family=2 entries=103 op=nft_register_chain pid=4667 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re"
Feb 9 19:01:42.851818 systemd-networkd[1512]: cali709f8c0876a: Gained IPv6LL
Feb 9 19:01:42.932000 audit[4689]: NETFILTER_CFG table=filter:117 family=2 entries=38 op=nft_register_chain pid=4689 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re"
Feb 9 19:01:42.932000 audit[4689]: SYSCALL arch=c000003e syscall=46 success=yes exit=19508 a0=3 a1=7ffccf92cc00 a2=0 a3=7ffccf92cbec items=0 ppid=4347 pid=4689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:01:42.932000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030
Feb 9 19:01:43.018000 audit[4709]: NETFILTER_CFG table=filter:118 family=2 entries=12 op=nft_register_rule pid=4709 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 9 19:01:43.018000 audit[4709]: SYSCALL arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7ffdf8b37dd0 a2=0 a3=7ffdf8b37dbc items=0 ppid=3099 pid=4709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:01:43.018000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 9 19:01:43.020000 audit[4709]: NETFILTER_CFG table=nat:119 family=2 entries=30 op=nft_register_rule pid=4709 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 9 19:01:43.020000 audit[4709]: SYSCALL arch=c000003e syscall=46 success=yes exit=8836 a0=3 a1=7ffdf8b37dd0 a2=0 a3=7ffdf8b37dbc items=0 ppid=3099 pid=4709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:01:43.020000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 9 19:01:43.043827 systemd-networkd[1512]: cali5e07b897b0b: Gained IPv6LL
Feb 9 19:01:43.184791 systemd[1]: run-containerd-runc-k8s.io-7b68f7f0947b9dc802baf5f137268570e843849a31d7324725cf8b39423685e9-runc.UkFYQZ.mount: Deactivated successfully.
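An aside on reading the audit records above (not part of the original log): the `proctitle=` fields are the process's command line encoded as hex, with NUL bytes separating the arguments. A minimal Python helper to decode them, assuming a plain-ASCII command line:

```python
def decode_proctitle(hex_str: str) -> str:
    """Decode an audit PROCTITLE hex string into a readable command line.

    The kernel records argv as raw bytes with NUL separators between
    arguments; bytes.fromhex() parses the hex, and each NUL is replaced
    with a space for display.
    """
    return bytes.fromhex(hex_str).decode("ascii").replace("\x00", " ")


# Decode one of the PROCTITLE values that appears in the log above.
print(decode_proctitle(
    "69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368"
    "002D2D766572626F7365002D2D77616974003130"
    "002D2D776169742D696E74657276616C003530303030"
))
# → iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000
```

The second proctitle variant seen later in the log decodes the same way, to `iptables-restore -w 5 -W 100000 --noflush --counters`.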
Feb 9 19:01:43.364710 kubelet[2906]: I0209 19:01:43.363884 2906 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-ztsmf" podStartSLOduration=38.363833641 pod.CreationTimestamp="2024-02-09 19:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:01:42.763272961 +0000 UTC m=+50.902220820" watchObservedRunningTime="2024-02-09 19:01:43.363833641 +0000 UTC m=+51.502781494"
Feb 9 19:01:43.416939 env[1709]: time="2024-02-09T19:01:43.416885473Z" level=info msg="StopPodSandbox for \"642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608\""
Feb 9 19:01:43.604755 env[1709]: 2024-02-09 19:01:43.517 [INFO][4745] k8s.go 578: Cleaning up netns ContainerID="642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608"
Feb 9 19:01:43.604755 env[1709]: 2024-02-09 19:01:43.517 [INFO][4745] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608" iface="eth0" netns="/var/run/netns/cni-9d1d50ba-61cb-798b-b6b0-fd945c86e028"
Feb 9 19:01:43.604755 env[1709]: 2024-02-09 19:01:43.517 [INFO][4745] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608" iface="eth0" netns="/var/run/netns/cni-9d1d50ba-61cb-798b-b6b0-fd945c86e028"
Feb 9 19:01:43.604755 env[1709]: 2024-02-09 19:01:43.518 [INFO][4745] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608" iface="eth0" netns="/var/run/netns/cni-9d1d50ba-61cb-798b-b6b0-fd945c86e028"
Feb 9 19:01:43.604755 env[1709]: 2024-02-09 19:01:43.518 [INFO][4745] k8s.go 585: Releasing IP address(es) ContainerID="642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608"
Feb 9 19:01:43.604755 env[1709]: 2024-02-09 19:01:43.518 [INFO][4745] utils.go 188: Calico CNI releasing IP address ContainerID="642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608"
Feb 9 19:01:43.604755 env[1709]: 2024-02-09 19:01:43.571 [INFO][4752] ipam_plugin.go 415: Releasing address using handleID ContainerID="642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608" HandleID="k8s-pod-network.642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608" Workload="ip--172--31--19--7-k8s-coredns--787d4945fb--ttftx-eth0"
Feb 9 19:01:43.604755 env[1709]: 2024-02-09 19:01:43.571 [INFO][4752] ipam_plugin.go 356: About to acquire host-wide IPAM lock.
Feb 9 19:01:43.604755 env[1709]: 2024-02-09 19:01:43.571 [INFO][4752] ipam_plugin.go 371: Acquired host-wide IPAM lock.
Feb 9 19:01:43.604755 env[1709]: 2024-02-09 19:01:43.590 [WARNING][4752] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608" HandleID="k8s-pod-network.642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608" Workload="ip--172--31--19--7-k8s-coredns--787d4945fb--ttftx-eth0"
Feb 9 19:01:43.604755 env[1709]: 2024-02-09 19:01:43.590 [INFO][4752] ipam_plugin.go 443: Releasing address using workloadID ContainerID="642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608" HandleID="k8s-pod-network.642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608" Workload="ip--172--31--19--7-k8s-coredns--787d4945fb--ttftx-eth0"
Feb 9 19:01:43.604755 env[1709]: 2024-02-09 19:01:43.593 [INFO][4752] ipam_plugin.go 377: Released host-wide IPAM lock.
Feb 9 19:01:43.604755 env[1709]: 2024-02-09 19:01:43.602 [INFO][4745] k8s.go 591: Teardown processing complete. ContainerID="642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608"
Feb 9 19:01:43.606850 env[1709]: time="2024-02-09T19:01:43.605593969Z" level=info msg="TearDown network for sandbox \"642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608\" successfully"
Feb 9 19:01:43.606850 env[1709]: time="2024-02-09T19:01:43.605632218Z" level=info msg="StopPodSandbox for \"642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608\" returns successfully"
Feb 9 19:01:43.607269 env[1709]: time="2024-02-09T19:01:43.607235903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-ttftx,Uid:91c852c3-24a9-4f7a-b9bf-6a6e2d9f5e9a,Namespace:kube-system,Attempt:1,}"
Feb 9 19:01:43.619789 systemd-networkd[1512]: calibaf62cc3e91: Gained IPv6LL
Feb 9 19:01:43.874688 systemd-networkd[1512]: vxlan.calico: Gained IPv6LL
Feb 9 19:01:43.912214 systemd[1]: run-netns-cni\x2d9d1d50ba\x2d61cb\x2d798b\x2db6b0\x2dfd945c86e028.mount: Deactivated successfully.
Feb 9 19:01:44.097000 audit[4800]: NETFILTER_CFG table=filter:120 family=2 entries=9 op=nft_register_rule pid=4800 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 9 19:01:44.097000 audit[4800]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffc70fa3250 a2=0 a3=7ffc70fa323c items=0 ppid=3099 pid=4800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:01:44.097000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 9 19:01:44.101000 audit[4800]: NETFILTER_CFG table=nat:121 family=2 entries=51 op=nft_register_chain pid=4800 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 9 19:01:44.101000 audit[4800]: SYSCALL arch=c000003e syscall=46 success=yes exit=19324 a0=3 a1=7ffc70fa3250 a2=0 a3=7ffc70fa323c items=0 ppid=3099 pid=4800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:01:44.101000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 9 19:01:44.196913 systemd-networkd[1512]: calic8dbe614170: Link UP
Feb 9 19:01:44.209921 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 9 19:01:44.211216 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calic8dbe614170: link becomes ready
Feb 9 19:01:44.210268 systemd-networkd[1512]: calic8dbe614170: Gained carrier
Feb 9 19:01:44.234102 env[1709]: 2024-02-09 19:01:43.890 [INFO][4758] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--7-k8s-coredns--787d4945fb--ttftx-eth0 coredns-787d4945fb- kube-system 91c852c3-24a9-4f7a-b9bf-6a6e2d9f5e9a 725 0 2024-02-09 19:01:05 +0000 UTC map[k8s-app:kube-dns pod-template-hash:787d4945fb projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-19-7 coredns-787d4945fb-ttftx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic8dbe614170 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="f40315cf91c0505cefa4da32353c71bd1a11f0649323ea5b9a70b62a37938fa4" Namespace="kube-system" Pod="coredns-787d4945fb-ttftx" WorkloadEndpoint="ip--172--31--19--7-k8s-coredns--787d4945fb--ttftx-"
Feb 9 19:01:44.234102 env[1709]: 2024-02-09 19:01:43.890 [INFO][4758] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="f40315cf91c0505cefa4da32353c71bd1a11f0649323ea5b9a70b62a37938fa4" Namespace="kube-system" Pod="coredns-787d4945fb-ttftx" WorkloadEndpoint="ip--172--31--19--7-k8s-coredns--787d4945fb--ttftx-eth0"
Feb 9 19:01:44.234102 env[1709]: 2024-02-09 19:01:44.101 [INFO][4779] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f40315cf91c0505cefa4da32353c71bd1a11f0649323ea5b9a70b62a37938fa4" HandleID="k8s-pod-network.f40315cf91c0505cefa4da32353c71bd1a11f0649323ea5b9a70b62a37938fa4" Workload="ip--172--31--19--7-k8s-coredns--787d4945fb--ttftx-eth0"
Feb 9 19:01:44.234102 env[1709]: 2024-02-09 19:01:44.122 [INFO][4779] ipam_plugin.go 268: Auto assigning IP ContainerID="f40315cf91c0505cefa4da32353c71bd1a11f0649323ea5b9a70b62a37938fa4" HandleID="k8s-pod-network.f40315cf91c0505cefa4da32353c71bd1a11f0649323ea5b9a70b62a37938fa4" Workload="ip--172--31--19--7-k8s-coredns--787d4945fb--ttftx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00049fa00), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-19-7", "pod":"coredns-787d4945fb-ttftx", "timestamp":"2024-02-09 19:01:44.101944466 +0000 UTC"}, Hostname:"ip-172-31-19-7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Feb 9 19:01:44.234102 env[1709]: 2024-02-09 19:01:44.122 [INFO][4779] ipam_plugin.go 356: About to acquire host-wide IPAM lock.
Feb 9 19:01:44.234102 env[1709]: 2024-02-09 19:01:44.123 [INFO][4779] ipam_plugin.go 371: Acquired host-wide IPAM lock.
Feb 9 19:01:44.234102 env[1709]: 2024-02-09 19:01:44.123 [INFO][4779] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-7'
Feb 9 19:01:44.234102 env[1709]: 2024-02-09 19:01:44.129 [INFO][4779] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f40315cf91c0505cefa4da32353c71bd1a11f0649323ea5b9a70b62a37938fa4" host="ip-172-31-19-7"
Feb 9 19:01:44.234102 env[1709]: 2024-02-09 19:01:44.149 [INFO][4779] ipam.go 372: Looking up existing affinities for host host="ip-172-31-19-7"
Feb 9 19:01:44.234102 env[1709]: 2024-02-09 19:01:44.154 [INFO][4779] ipam.go 489: Trying affinity for 192.168.39.64/26 host="ip-172-31-19-7"
Feb 9 19:01:44.234102 env[1709]: 2024-02-09 19:01:44.156 [INFO][4779] ipam.go 155: Attempting to load block cidr=192.168.39.64/26 host="ip-172-31-19-7"
Feb 9 19:01:44.234102 env[1709]: 2024-02-09 19:01:44.163 [INFO][4779] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.39.64/26 host="ip-172-31-19-7"
Feb 9 19:01:44.234102 env[1709]: 2024-02-09 19:01:44.163 [INFO][4779] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.39.64/26 handle="k8s-pod-network.f40315cf91c0505cefa4da32353c71bd1a11f0649323ea5b9a70b62a37938fa4" host="ip-172-31-19-7"
Feb 9 19:01:44.234102 env[1709]: 2024-02-09 19:01:44.171 [INFO][4779] ipam.go 1682: Creating new handle: k8s-pod-network.f40315cf91c0505cefa4da32353c71bd1a11f0649323ea5b9a70b62a37938fa4
Feb 9 19:01:44.234102 env[1709]: 2024-02-09 19:01:44.178 [INFO][4779] ipam.go 1203: Writing block in order to claim IPs block=192.168.39.64/26 handle="k8s-pod-network.f40315cf91c0505cefa4da32353c71bd1a11f0649323ea5b9a70b62a37938fa4" host="ip-172-31-19-7"
Feb 9 19:01:44.234102 env[1709]: 2024-02-09 19:01:44.184 [INFO][4779] ipam.go 1216: Successfully claimed IPs: [192.168.39.68/26] block=192.168.39.64/26 handle="k8s-pod-network.f40315cf91c0505cefa4da32353c71bd1a11f0649323ea5b9a70b62a37938fa4" host="ip-172-31-19-7"
Feb 9 19:01:44.234102 env[1709]: 2024-02-09 19:01:44.184 [INFO][4779] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.39.68/26] handle="k8s-pod-network.f40315cf91c0505cefa4da32353c71bd1a11f0649323ea5b9a70b62a37938fa4" host="ip-172-31-19-7"
Feb 9 19:01:44.234102 env[1709]: 2024-02-09 19:01:44.184 [INFO][4779] ipam_plugin.go 377: Released host-wide IPAM lock.
Feb 9 19:01:44.234102 env[1709]: 2024-02-09 19:01:44.185 [INFO][4779] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.39.68/26] IPv6=[] ContainerID="f40315cf91c0505cefa4da32353c71bd1a11f0649323ea5b9a70b62a37938fa4" HandleID="k8s-pod-network.f40315cf91c0505cefa4da32353c71bd1a11f0649323ea5b9a70b62a37938fa4" Workload="ip--172--31--19--7-k8s-coredns--787d4945fb--ttftx-eth0"
Feb 9 19:01:44.238006 env[1709]: 2024-02-09 19:01:44.190 [INFO][4758] k8s.go 385: Populated endpoint ContainerID="f40315cf91c0505cefa4da32353c71bd1a11f0649323ea5b9a70b62a37938fa4" Namespace="kube-system" Pod="coredns-787d4945fb-ttftx" WorkloadEndpoint="ip--172--31--19--7-k8s-coredns--787d4945fb--ttftx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--7-k8s-coredns--787d4945fb--ttftx-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"91c852c3-24a9-4f7a-b9bf-6a6e2d9f5e9a", ResourceVersion:"725", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 1, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-7", ContainerID:"", Pod:"coredns-787d4945fb-ttftx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.39.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic8dbe614170", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 9 19:01:44.238006 env[1709]: 2024-02-09 19:01:44.191 [INFO][4758] k8s.go 386: Calico CNI using IPs: [192.168.39.68/32] ContainerID="f40315cf91c0505cefa4da32353c71bd1a11f0649323ea5b9a70b62a37938fa4" Namespace="kube-system" Pod="coredns-787d4945fb-ttftx" WorkloadEndpoint="ip--172--31--19--7-k8s-coredns--787d4945fb--ttftx-eth0"
Feb 9 19:01:44.238006 env[1709]: 2024-02-09 19:01:44.191 [INFO][4758] dataplane_linux.go 68: Setting the host side veth name to calic8dbe614170 ContainerID="f40315cf91c0505cefa4da32353c71bd1a11f0649323ea5b9a70b62a37938fa4" Namespace="kube-system" Pod="coredns-787d4945fb-ttftx" WorkloadEndpoint="ip--172--31--19--7-k8s-coredns--787d4945fb--ttftx-eth0"
Feb 9 19:01:44.238006 env[1709]: 2024-02-09 19:01:44.211 [INFO][4758] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="f40315cf91c0505cefa4da32353c71bd1a11f0649323ea5b9a70b62a37938fa4" Namespace="kube-system" Pod="coredns-787d4945fb-ttftx" WorkloadEndpoint="ip--172--31--19--7-k8s-coredns--787d4945fb--ttftx-eth0"
Feb 9 19:01:44.238006 env[1709]: 2024-02-09 19:01:44.212 [INFO][4758] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="f40315cf91c0505cefa4da32353c71bd1a11f0649323ea5b9a70b62a37938fa4" Namespace="kube-system" Pod="coredns-787d4945fb-ttftx" WorkloadEndpoint="ip--172--31--19--7-k8s-coredns--787d4945fb--ttftx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--7-k8s-coredns--787d4945fb--ttftx-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"91c852c3-24a9-4f7a-b9bf-6a6e2d9f5e9a", ResourceVersion:"725", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 1, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-7", ContainerID:"f40315cf91c0505cefa4da32353c71bd1a11f0649323ea5b9a70b62a37938fa4", Pod:"coredns-787d4945fb-ttftx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.39.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic8dbe614170", MAC:"96:1a:3e:81:25:3c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 9 19:01:44.238006 env[1709]: 2024-02-09 19:01:44.230 [INFO][4758] k8s.go 491: Wrote updated endpoint to datastore ContainerID="f40315cf91c0505cefa4da32353c71bd1a11f0649323ea5b9a70b62a37938fa4" Namespace="kube-system" Pod="coredns-787d4945fb-ttftx" WorkloadEndpoint="ip--172--31--19--7-k8s-coredns--787d4945fb--ttftx-eth0"
Feb 9 19:01:44.288000 audit[4817]: NETFILTER_CFG table=filter:122 family=2 entries=38 op=nft_register_chain pid=4817 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re"
Feb 9 19:01:44.288000 audit[4817]: SYSCALL arch=c000003e syscall=46 success=yes exit=19088 a0=3 a1=7fff26c22630 a2=0 a3=7fff26c2261c items=0 ppid=4347 pid=4817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:01:44.288000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030
Feb 9 19:01:44.329578 env[1709]: time="2024-02-09T19:01:44.329426020Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:01:44.329578 env[1709]: time="2024-02-09T19:01:44.329501249Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:01:44.329578 env[1709]: time="2024-02-09T19:01:44.329531477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:01:44.330241 env[1709]: time="2024-02-09T19:01:44.330202073Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f40315cf91c0505cefa4da32353c71bd1a11f0649323ea5b9a70b62a37938fa4 pid=4823 runtime=io.containerd.runc.v2
Feb 9 19:01:44.478152 env[1709]: time="2024-02-09T19:01:44.476558950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-ttftx,Uid:91c852c3-24a9-4f7a-b9bf-6a6e2d9f5e9a,Namespace:kube-system,Attempt:1,} returns sandbox id \"f40315cf91c0505cefa4da32353c71bd1a11f0649323ea5b9a70b62a37938fa4\""
Feb 9 19:01:44.484892 env[1709]: time="2024-02-09T19:01:44.484852585Z" level=info msg="CreateContainer within sandbox \"f40315cf91c0505cefa4da32353c71bd1a11f0649323ea5b9a70b62a37938fa4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 9 19:01:44.508440 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3104692391.mount: Deactivated successfully.
Feb 9 19:01:44.522661 env[1709]: time="2024-02-09T19:01:44.522613524Z" level=info msg="CreateContainer within sandbox \"f40315cf91c0505cefa4da32353c71bd1a11f0649323ea5b9a70b62a37938fa4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"074871340b7db67da9354434acb81d17b2d413033597bd5296a419701e5f375a\""
Feb 9 19:01:44.526759 env[1709]: time="2024-02-09T19:01:44.526705402Z" level=info msg="StartContainer for \"074871340b7db67da9354434acb81d17b2d413033597bd5296a419701e5f375a\""
Feb 9 19:01:44.713796 env[1709]: time="2024-02-09T19:01:44.713730403Z" level=info msg="StartContainer for \"074871340b7db67da9354434acb81d17b2d413033597bd5296a419701e5f375a\" returns successfully"
Feb 9 19:01:45.730772 systemd-networkd[1512]: calic8dbe614170: Gained IPv6LL
Feb 9 19:01:45.870564 kubelet[2906]: I0209 19:01:45.870250 2906 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-ttftx" podStartSLOduration=40.870199749 pod.CreationTimestamp="2024-02-09 19:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:01:45.84895731 +0000 UTC m=+53.987905169" watchObservedRunningTime="2024-02-09 19:01:45.870199749 +0000 UTC m=+54.009147607"
Feb 9 19:01:46.024000 audit[4920]: NETFILTER_CFG table=filter:123 family=2 entries=6 op=nft_register_rule pid=4920 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 9 19:01:46.024000 audit[4920]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffc35bc7410 a2=0 a3=7ffc35bc73fc items=0 ppid=3099 pid=4920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:01:46.024000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 9 19:01:46.031000 audit[4920]: NETFILTER_CFG table=nat:124 family=2 entries=60 op=nft_register_rule pid=4920 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 9 19:01:46.031000 audit[4920]: SYSCALL arch=c000003e syscall=46 success=yes exit=19324 a0=3 a1=7ffc35bc7410 a2=0 a3=7ffc35bc73fc items=0 ppid=3099 pid=4920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:01:46.031000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 9 19:01:46.268000 audit[4946]: NETFILTER_CFG table=filter:125 family=2 entries=6 op=nft_register_rule pid=4946 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 9 19:01:46.268000 audit[4946]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7fff22a2f090 a2=0 a3=7fff22a2f07c items=0 ppid=3099 pid=4946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:01:46.268000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 9 19:01:46.301000 audit[4946]: NETFILTER_CFG table=nat:126 family=2 entries=72 op=nft_register_chain pid=4946 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 9 19:01:46.301000 audit[4946]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7fff22a2f090 a2=0 a3=7fff22a2f07c items=0 ppid=3099 pid=4946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:01:46.301000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 9 19:01:46.530314 env[1709]: time="2024-02-09T19:01:46.530257740Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:01:46.535427 env[1709]: time="2024-02-09T19:01:46.535395447Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4e87edec0297dadd6f3bb25b2f540fd40e2abed9fff582c97ff4cd751d3f9803,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:01:46.542945 env[1709]: time="2024-02-09T19:01:46.542883919Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:01:46.547058 env[1709]: time="2024-02-09T19:01:46.547003410Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:e264ab1fb2f1ae90dd1d84e226d11d2eb4350e74ac27de4c65f29f5aadba5bb1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:01:46.549816 env[1709]: time="2024-02-09T19:01:46.549758929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.27.0\" returns image reference \"sha256:4e87edec0297dadd6f3bb25b2f540fd40e2abed9fff582c97ff4cd751d3f9803\""
Feb 9 19:01:46.554725 env[1709]: time="2024-02-09T19:01:46.554671213Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\""
Feb 9 19:01:46.602286 env[1709]: time="2024-02-09T19:01:46.601439817Z" level=info msg="CreateContainer within sandbox \"add3667226ff6d52a34c0fa1f0ce7852032b86882dbaec0a3f7551aef714dcf7\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Feb 9 19:01:46.643200 env[1709]: time="2024-02-09T19:01:46.641956282Z" level=info msg="CreateContainer within sandbox \"add3667226ff6d52a34c0fa1f0ce7852032b86882dbaec0a3f7551aef714dcf7\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"d1b341d8aceb126cadae0fc8420c502e061bd827544986cf6e326aa32a1a04a9\""
Feb 9 19:01:46.643616 env[1709]: time="2024-02-09T19:01:46.643579466Z" level=info msg="StartContainer for \"d1b341d8aceb126cadae0fc8420c502e061bd827544986cf6e326aa32a1a04a9\""
Feb 9 19:01:46.850819 env[1709]: time="2024-02-09T19:01:46.850773495Z" level=info msg="StartContainer for \"d1b341d8aceb126cadae0fc8420c502e061bd827544986cf6e326aa32a1a04a9\" returns successfully"
Feb 9 19:01:47.762240 kubelet[2906]: I0209 19:01:47.762086 2906 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-78449db666-g6hg6" podStartSLOduration=-9.22337200009274e+09 pod.CreationTimestamp="2024-02-09 19:01:11 +0000 UTC" firstStartedPulling="2024-02-09 19:01:41.876269797 +0000 UTC m=+50.015217641" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:01:47.761320513 +0000 UTC m=+55.900268371" watchObservedRunningTime="2024-02-09 19:01:47.762036029 +0000 UTC m=+55.900983889"
Feb 9 19:01:47.955670 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1431231683.mount: Deactivated successfully.
Feb 9 19:01:48.624406 env[1709]: time="2024-02-09T19:01:48.624371143Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:01:48.628051 env[1709]: time="2024-02-09T19:01:48.627997724Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:91c1c91da7602f16686c149419195b486669f3a1828fd320cf332fdc6a25297d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:01:48.632922 env[1709]: time="2024-02-09T19:01:48.632880746Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:01:48.636370 env[1709]: time="2024-02-09T19:01:48.636329707Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:2b9021393c17e87ba8a3c89f5b3719941812f4e4751caa0b71eb2233bff48738,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:01:48.637452 env[1709]: time="2024-02-09T19:01:48.637414083Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\" returns image reference \"sha256:91c1c91da7602f16686c149419195b486669f3a1828fd320cf332fdc6a25297d\""
Feb 9 19:01:48.645818 env[1709]: time="2024-02-09T19:01:48.645354061Z" level=info msg="CreateContainer within sandbox \"c48c9130704f7c9c50f210bcff9cdaf7fbcff1eab5fa2809764a81a766952c63\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Feb 9 19:01:48.681789 env[1709]: time="2024-02-09T19:01:48.679113952Z" level=info msg="CreateContainer within sandbox \"c48c9130704f7c9c50f210bcff9cdaf7fbcff1eab5fa2809764a81a766952c63\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"93771a955d0fd6c02dc087a0b7ff4e647344709564b50d39c54f81c5b2fc298e\""
Feb 9 19:01:48.682883 env[1709]: time="2024-02-09T19:01:48.682829793Z" level=info msg="StartContainer for \"93771a955d0fd6c02dc087a0b7ff4e647344709564b50d39c54f81c5b2fc298e\""
Feb 9 19:01:48.765165 systemd[1]: run-containerd-runc-k8s.io-93771a955d0fd6c02dc087a0b7ff4e647344709564b50d39c54f81c5b2fc298e-runc.cqdZBr.mount: Deactivated successfully.
Feb 9 19:01:48.855762 env[1709]: time="2024-02-09T19:01:48.855701633Z" level=info msg="StartContainer for \"93771a955d0fd6c02dc087a0b7ff4e647344709564b50d39c54f81c5b2fc298e\" returns successfully"
Feb 9 19:01:48.858424 env[1709]: time="2024-02-09T19:01:48.858302206Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0\""
Feb 9 19:01:50.700472 env[1709]: time="2024-02-09T19:01:50.700417512Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:01:50.704005 env[1709]: time="2024-02-09T19:01:50.703958133Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d36ef67f7b24c4facd86d0bc06b0cd907431a822dee695eb06b86a905bff85d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:01:50.707003 env[1709]: time="2024-02-09T19:01:50.706952494Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:01:50.710039 env[1709]: time="2024-02-09T19:01:50.709991950Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:45a7aba6020a7cf7b866cb8a8d481b30c97e9b3407e1459aaa65a5b4cc06633a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:01:50.710927 env[1709]: time="2024-02-09T19:01:50.710890094Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0\" returns image reference \"sha256:d36ef67f7b24c4facd86d0bc06b0cd907431a822dee695eb06b86a905bff85d4\""
Feb 9 19:01:50.713439 env[1709]:
time="2024-02-09T19:01:50.713393200Z" level=info msg="CreateContainer within sandbox \"c48c9130704f7c9c50f210bcff9cdaf7fbcff1eab5fa2809764a81a766952c63\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 9 19:01:50.745211 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3842135404.mount: Deactivated successfully. Feb 9 19:01:50.749809 env[1709]: time="2024-02-09T19:01:50.749763424Z" level=info msg="CreateContainer within sandbox \"c48c9130704f7c9c50f210bcff9cdaf7fbcff1eab5fa2809764a81a766952c63\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"51d5abbd5c8a176289f674da0db395008169ea27b476a299b24dca2324ce5e14\"" Feb 9 19:01:50.750430 env[1709]: time="2024-02-09T19:01:50.750396649Z" level=info msg="StartContainer for \"51d5abbd5c8a176289f674da0db395008169ea27b476a299b24dca2324ce5e14\"" Feb 9 19:01:50.810207 systemd[1]: run-containerd-runc-k8s.io-51d5abbd5c8a176289f674da0db395008169ea27b476a299b24dca2324ce5e14-runc.ttaIkc.mount: Deactivated successfully. 
Feb 9 19:01:50.902179 env[1709]: time="2024-02-09T19:01:50.902138052Z" level=info msg="StartContainer for \"51d5abbd5c8a176289f674da0db395008169ea27b476a299b24dca2324ce5e14\" returns successfully" Feb 9 19:01:51.200937 kubelet[2906]: I0209 19:01:51.200890 2906 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:01:51.210032 kubelet[2906]: I0209 19:01:51.210003 2906 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:01:51.255440 kubelet[2906]: I0209 19:01:51.255407 2906 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/00941537-21dd-4a7a-8c68-9ebeed83da86-calico-apiserver-certs\") pod \"calico-apiserver-59fbf65d7-hkxpp\" (UID: \"00941537-21dd-4a7a-8c68-9ebeed83da86\") " pod="calico-apiserver/calico-apiserver-59fbf65d7-hkxpp" Feb 9 19:01:51.255852 kubelet[2906]: I0209 19:01:51.255832 2906 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/dc24791c-61f8-4dc5-8442-209a6f3756c5-calico-apiserver-certs\") pod \"calico-apiserver-59fbf65d7-mkmkb\" (UID: \"dc24791c-61f8-4dc5-8442-209a6f3756c5\") " pod="calico-apiserver/calico-apiserver-59fbf65d7-mkmkb" Feb 9 19:01:51.256596 kubelet[2906]: I0209 19:01:51.256574 2906 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvhr9\" (UniqueName: \"kubernetes.io/projected/dc24791c-61f8-4dc5-8442-209a6f3756c5-kube-api-access-vvhr9\") pod \"calico-apiserver-59fbf65d7-mkmkb\" (UID: \"dc24791c-61f8-4dc5-8442-209a6f3756c5\") " pod="calico-apiserver/calico-apiserver-59fbf65d7-mkmkb" Feb 9 19:01:51.256839 kubelet[2906]: I0209 19:01:51.256825 2906 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4wq7\" (UniqueName: 
\"kubernetes.io/projected/00941537-21dd-4a7a-8c68-9ebeed83da86-kube-api-access-n4wq7\") pod \"calico-apiserver-59fbf65d7-hkxpp\" (UID: \"00941537-21dd-4a7a-8c68-9ebeed83da86\") " pod="calico-apiserver/calico-apiserver-59fbf65d7-hkxpp" Feb 9 19:01:51.361828 kubelet[2906]: E0209 19:01:51.361793 2906 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Feb 9 19:01:51.362716 kubelet[2906]: E0209 19:01:51.362142 2906 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Feb 9 19:01:51.365453 kubelet[2906]: E0209 19:01:51.365430 2906 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc24791c-61f8-4dc5-8442-209a6f3756c5-calico-apiserver-certs podName:dc24791c-61f8-4dc5-8442-209a6f3756c5 nodeName:}" failed. No retries permitted until 2024-02-09 19:01:51.864056555 +0000 UTC m=+60.003004410 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/dc24791c-61f8-4dc5-8442-209a6f3756c5-calico-apiserver-certs") pod "calico-apiserver-59fbf65d7-mkmkb" (UID: "dc24791c-61f8-4dc5-8442-209a6f3756c5") : secret "calico-apiserver-certs" not found Feb 9 19:01:51.365729 kubelet[2906]: E0209 19:01:51.365705 2906 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00941537-21dd-4a7a-8c68-9ebeed83da86-calico-apiserver-certs podName:00941537-21dd-4a7a-8c68-9ebeed83da86 nodeName:}" failed. No retries permitted until 2024-02-09 19:01:51.86568052 +0000 UTC m=+60.004628372 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/00941537-21dd-4a7a-8c68-9ebeed83da86-calico-apiserver-certs") pod "calico-apiserver-59fbf65d7-hkxpp" (UID: "00941537-21dd-4a7a-8c68-9ebeed83da86") : secret "calico-apiserver-certs" not found Feb 9 19:01:51.373638 kernel: kauditd_printk_skb: 32 callbacks suppressed Feb 9 19:01:51.373778 kernel: audit: type=1325 audit(1707505311.367:322): table=filter:127 family=2 entries=7 op=nft_register_rule pid=5103 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:01:51.381538 kernel: audit: type=1300 audit(1707505311.367:322): arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffcc15d9f60 a2=0 a3=7ffcc15d9f4c items=0 ppid=3099 pid=5103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:51.367000 audit[5103]: NETFILTER_CFG table=filter:127 family=2 entries=7 op=nft_register_rule pid=5103 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:01:51.367000 audit[5103]: SYSCALL arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffcc15d9f60 a2=0 a3=7ffcc15d9f4c items=0 ppid=3099 pid=5103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:51.367000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:01:51.399551 kernel: audit: type=1327 audit(1707505311.367:322): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:01:51.401000 audit[5103]: NETFILTER_CFG table=nat:128 family=2 entries=78 op=nft_register_rule pid=5103 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Feb 9 19:01:51.406601 kernel: audit: type=1325 audit(1707505311.401:323): table=nat:128 family=2 entries=78 op=nft_register_rule pid=5103 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:01:51.401000 audit[5103]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffcc15d9f60 a2=0 a3=7ffcc15d9f4c items=0 ppid=3099 pid=5103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:51.415543 kernel: audit: type=1300 audit(1707505311.401:323): arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffcc15d9f60 a2=0 a3=7ffcc15d9f4c items=0 ppid=3099 pid=5103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:51.401000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:01:51.424714 kernel: audit: type=1327 audit(1707505311.401:323): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:01:51.456490 kubelet[2906]: I0209 19:01:51.456370 2906 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 9 19:01:51.457493 kubelet[2906]: I0209 19:01:51.457399 2906 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 9 19:01:51.564000 audit[5131]: NETFILTER_CFG table=filter:129 family=2 entries=8 op=nft_register_rule pid=5131 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:01:51.564000 audit[5131]: SYSCALL arch=c000003e syscall=46 
success=yes exit=2620 a0=3 a1=7ffcd1a63ab0 a2=0 a3=7ffcd1a63a9c items=0 ppid=3099 pid=5131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:51.579201 kernel: audit: type=1325 audit(1707505311.564:324): table=filter:129 family=2 entries=8 op=nft_register_rule pid=5131 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:01:51.579350 kernel: audit: type=1300 audit(1707505311.564:324): arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffcd1a63ab0 a2=0 a3=7ffcd1a63a9c items=0 ppid=3099 pid=5131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:51.564000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:01:51.582945 kernel: audit: type=1327 audit(1707505311.564:324): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:01:51.574000 audit[5131]: NETFILTER_CFG table=nat:130 family=2 entries=78 op=nft_register_rule pid=5131 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:01:51.574000 audit[5131]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffcd1a63ab0 a2=0 a3=7ffcd1a63a9c items=0 ppid=3099 pid=5131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:51.590644 kernel: audit: type=1325 audit(1707505311.574:325): table=nat:130 family=2 entries=78 op=nft_register_rule pid=5131 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:01:51.574000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:01:52.012591 kubelet[2906]: I0209 19:01:52.012561 2906 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-sfvlp" podStartSLOduration=-9.223371995842285e+09 pod.CreationTimestamp="2024-02-09 19:01:11 +0000 UTC" firstStartedPulling="2024-02-09 19:01:42.529905704 +0000 UTC m=+50.668853553" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:01:52.011771179 +0000 UTC m=+60.150719038" watchObservedRunningTime="2024-02-09 19:01:52.012490192 +0000 UTC m=+60.151438052" Feb 9 19:01:52.098833 env[1709]: time="2024-02-09T19:01:52.098789907Z" level=info msg="StopPodSandbox for \"0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649\"" Feb 9 19:01:52.109039 env[1709]: time="2024-02-09T19:01:52.108995429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59fbf65d7-mkmkb,Uid:dc24791c-61f8-4dc5-8442-209a6f3756c5,Namespace:calico-apiserver,Attempt:0,}" Feb 9 19:01:52.135046 env[1709]: time="2024-02-09T19:01:52.134995785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59fbf65d7-hkxpp,Uid:00941537-21dd-4a7a-8c68-9ebeed83da86,Namespace:calico-apiserver,Attempt:0,}" Feb 9 19:01:52.480819 env[1709]: 2024-02-09 19:01:52.302 [WARNING][5154] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--7-k8s-calico--kube--controllers--78449db666--g6hg6-eth0", GenerateName:"calico-kube-controllers-78449db666-", Namespace:"calico-system", SelfLink:"", UID:"607248e1-9d7a-4a9e-9970-28f4dcfc35fc", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 1, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78449db666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-7", ContainerID:"add3667226ff6d52a34c0fa1f0ce7852032b86882dbaec0a3f7551aef714dcf7", Pod:"calico-kube-controllers-78449db666-g6hg6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.39.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali709f8c0876a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:01:52.480819 env[1709]: 2024-02-09 19:01:52.303 [INFO][5154] k8s.go 578: Cleaning up netns ContainerID="0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649" Feb 9 19:01:52.480819 env[1709]: 2024-02-09 19:01:52.303 [INFO][5154] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649" iface="eth0" netns="" Feb 9 19:01:52.480819 env[1709]: 2024-02-09 19:01:52.305 [INFO][5154] k8s.go 585: Releasing IP address(es) ContainerID="0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649" Feb 9 19:01:52.480819 env[1709]: 2024-02-09 19:01:52.305 [INFO][5154] utils.go 188: Calico CNI releasing IP address ContainerID="0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649" Feb 9 19:01:52.480819 env[1709]: 2024-02-09 19:01:52.448 [INFO][5177] ipam_plugin.go 415: Releasing address using handleID ContainerID="0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649" HandleID="k8s-pod-network.0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649" Workload="ip--172--31--19--7-k8s-calico--kube--controllers--78449db666--g6hg6-eth0" Feb 9 19:01:52.480819 env[1709]: 2024-02-09 19:01:52.450 [INFO][5177] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:01:52.480819 env[1709]: 2024-02-09 19:01:52.450 [INFO][5177] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:01:52.480819 env[1709]: 2024-02-09 19:01:52.460 [WARNING][5177] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649" HandleID="k8s-pod-network.0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649" Workload="ip--172--31--19--7-k8s-calico--kube--controllers--78449db666--g6hg6-eth0" Feb 9 19:01:52.480819 env[1709]: 2024-02-09 19:01:52.460 [INFO][5177] ipam_plugin.go 443: Releasing address using workloadID ContainerID="0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649" HandleID="k8s-pod-network.0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649" Workload="ip--172--31--19--7-k8s-calico--kube--controllers--78449db666--g6hg6-eth0" Feb 9 19:01:52.480819 env[1709]: 2024-02-09 19:01:52.463 [INFO][5177] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:01:52.480819 env[1709]: 2024-02-09 19:01:52.475 [INFO][5154] k8s.go 591: Teardown processing complete. ContainerID="0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649" Feb 9 19:01:52.480819 env[1709]: time="2024-02-09T19:01:52.480233584Z" level=info msg="TearDown network for sandbox \"0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649\" successfully" Feb 9 19:01:52.480819 env[1709]: time="2024-02-09T19:01:52.480269546Z" level=info msg="StopPodSandbox for \"0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649\" returns successfully" Feb 9 19:01:52.483179 env[1709]: time="2024-02-09T19:01:52.481194083Z" level=info msg="RemovePodSandbox for \"0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649\"" Feb 9 19:01:52.483179 env[1709]: time="2024-02-09T19:01:52.481230904Z" level=info msg="Forcibly stopping sandbox \"0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649\"" Feb 9 19:01:52.564632 (udev-worker)[5212]: Network interface NamePolicy= disabled on kernel command line. 
Feb 9 19:01:52.566085 systemd-networkd[1512]: cali56c38188811: Link UP Feb 9 19:01:52.577384 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:01:52.578934 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali56c38188811: link becomes ready Feb 9 19:01:52.579084 systemd-networkd[1512]: cali56c38188811: Gained carrier Feb 9 19:01:52.610426 env[1709]: 2024-02-09 19:01:52.256 [INFO][5142] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--7-k8s-calico--apiserver--59fbf65d7--mkmkb-eth0 calico-apiserver-59fbf65d7- calico-apiserver dc24791c-61f8-4dc5-8442-209a6f3756c5 829 0 2024-02-09 19:01:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:59fbf65d7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-19-7 calico-apiserver-59fbf65d7-mkmkb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali56c38188811 [] []}} ContainerID="16d2cb017087eb605f04a6844e9e6171d80867b5a4198094d053a36fd1146bb8" Namespace="calico-apiserver" Pod="calico-apiserver-59fbf65d7-mkmkb" WorkloadEndpoint="ip--172--31--19--7-k8s-calico--apiserver--59fbf65d7--mkmkb-" Feb 9 19:01:52.610426 env[1709]: 2024-02-09 19:01:52.257 [INFO][5142] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="16d2cb017087eb605f04a6844e9e6171d80867b5a4198094d053a36fd1146bb8" Namespace="calico-apiserver" Pod="calico-apiserver-59fbf65d7-mkmkb" WorkloadEndpoint="ip--172--31--19--7-k8s-calico--apiserver--59fbf65d7--mkmkb-eth0" Feb 9 19:01:52.610426 env[1709]: 2024-02-09 19:01:52.465 [INFO][5175] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="16d2cb017087eb605f04a6844e9e6171d80867b5a4198094d053a36fd1146bb8" HandleID="k8s-pod-network.16d2cb017087eb605f04a6844e9e6171d80867b5a4198094d053a36fd1146bb8" 
Workload="ip--172--31--19--7-k8s-calico--apiserver--59fbf65d7--mkmkb-eth0" Feb 9 19:01:52.610426 env[1709]: 2024-02-09 19:01:52.483 [INFO][5175] ipam_plugin.go 268: Auto assigning IP ContainerID="16d2cb017087eb605f04a6844e9e6171d80867b5a4198094d053a36fd1146bb8" HandleID="k8s-pod-network.16d2cb017087eb605f04a6844e9e6171d80867b5a4198094d053a36fd1146bb8" Workload="ip--172--31--19--7-k8s-calico--apiserver--59fbf65d7--mkmkb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051950), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-19-7", "pod":"calico-apiserver-59fbf65d7-mkmkb", "timestamp":"2024-02-09 19:01:52.465666265 +0000 UTC"}, Hostname:"ip-172-31-19-7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 19:01:52.610426 env[1709]: 2024-02-09 19:01:52.483 [INFO][5175] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:01:52.610426 env[1709]: 2024-02-09 19:01:52.483 [INFO][5175] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 9 19:01:52.610426 env[1709]: 2024-02-09 19:01:52.483 [INFO][5175] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-7' Feb 9 19:01:52.610426 env[1709]: 2024-02-09 19:01:52.486 [INFO][5175] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.16d2cb017087eb605f04a6844e9e6171d80867b5a4198094d053a36fd1146bb8" host="ip-172-31-19-7" Feb 9 19:01:52.610426 env[1709]: 2024-02-09 19:01:52.491 [INFO][5175] ipam.go 372: Looking up existing affinities for host host="ip-172-31-19-7" Feb 9 19:01:52.610426 env[1709]: 2024-02-09 19:01:52.497 [INFO][5175] ipam.go 489: Trying affinity for 192.168.39.64/26 host="ip-172-31-19-7" Feb 9 19:01:52.610426 env[1709]: 2024-02-09 19:01:52.510 [INFO][5175] ipam.go 155: Attempting to load block cidr=192.168.39.64/26 host="ip-172-31-19-7" Feb 9 19:01:52.610426 env[1709]: 2024-02-09 19:01:52.520 [INFO][5175] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.39.64/26 host="ip-172-31-19-7" Feb 9 19:01:52.610426 env[1709]: 2024-02-09 19:01:52.520 [INFO][5175] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.39.64/26 handle="k8s-pod-network.16d2cb017087eb605f04a6844e9e6171d80867b5a4198094d053a36fd1146bb8" host="ip-172-31-19-7" Feb 9 19:01:52.610426 env[1709]: 2024-02-09 19:01:52.527 [INFO][5175] ipam.go 1682: Creating new handle: k8s-pod-network.16d2cb017087eb605f04a6844e9e6171d80867b5a4198094d053a36fd1146bb8 Feb 9 19:01:52.610426 env[1709]: 2024-02-09 19:01:52.540 [INFO][5175] ipam.go 1203: Writing block in order to claim IPs block=192.168.39.64/26 handle="k8s-pod-network.16d2cb017087eb605f04a6844e9e6171d80867b5a4198094d053a36fd1146bb8" host="ip-172-31-19-7" Feb 9 19:01:52.610426 env[1709]: 2024-02-09 19:01:52.550 [INFO][5175] ipam.go 1216: Successfully claimed IPs: [192.168.39.69/26] block=192.168.39.64/26 handle="k8s-pod-network.16d2cb017087eb605f04a6844e9e6171d80867b5a4198094d053a36fd1146bb8" host="ip-172-31-19-7" Feb 9 19:01:52.610426 env[1709]: 
2024-02-09 19:01:52.550 [INFO][5175] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.39.69/26] handle="k8s-pod-network.16d2cb017087eb605f04a6844e9e6171d80867b5a4198094d053a36fd1146bb8" host="ip-172-31-19-7" Feb 9 19:01:52.610426 env[1709]: 2024-02-09 19:01:52.550 [INFO][5175] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:01:52.610426 env[1709]: 2024-02-09 19:01:52.550 [INFO][5175] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.39.69/26] IPv6=[] ContainerID="16d2cb017087eb605f04a6844e9e6171d80867b5a4198094d053a36fd1146bb8" HandleID="k8s-pod-network.16d2cb017087eb605f04a6844e9e6171d80867b5a4198094d053a36fd1146bb8" Workload="ip--172--31--19--7-k8s-calico--apiserver--59fbf65d7--mkmkb-eth0" Feb 9 19:01:52.613906 env[1709]: 2024-02-09 19:01:52.553 [INFO][5142] k8s.go 385: Populated endpoint ContainerID="16d2cb017087eb605f04a6844e9e6171d80867b5a4198094d053a36fd1146bb8" Namespace="calico-apiserver" Pod="calico-apiserver-59fbf65d7-mkmkb" WorkloadEndpoint="ip--172--31--19--7-k8s-calico--apiserver--59fbf65d7--mkmkb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--7-k8s-calico--apiserver--59fbf65d7--mkmkb-eth0", GenerateName:"calico-apiserver-59fbf65d7-", Namespace:"calico-apiserver", SelfLink:"", UID:"dc24791c-61f8-4dc5-8442-209a6f3756c5", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 1, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59fbf65d7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-7", ContainerID:"", Pod:"calico-apiserver-59fbf65d7-mkmkb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.39.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali56c38188811", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:01:52.613906 env[1709]: 2024-02-09 19:01:52.553 [INFO][5142] k8s.go 386: Calico CNI using IPs: [192.168.39.69/32] ContainerID="16d2cb017087eb605f04a6844e9e6171d80867b5a4198094d053a36fd1146bb8" Namespace="calico-apiserver" Pod="calico-apiserver-59fbf65d7-mkmkb" WorkloadEndpoint="ip--172--31--19--7-k8s-calico--apiserver--59fbf65d7--mkmkb-eth0" Feb 9 19:01:52.613906 env[1709]: 2024-02-09 19:01:52.554 [INFO][5142] dataplane_linux.go 68: Setting the host side veth name to cali56c38188811 ContainerID="16d2cb017087eb605f04a6844e9e6171d80867b5a4198094d053a36fd1146bb8" Namespace="calico-apiserver" Pod="calico-apiserver-59fbf65d7-mkmkb" WorkloadEndpoint="ip--172--31--19--7-k8s-calico--apiserver--59fbf65d7--mkmkb-eth0" Feb 9 19:01:52.613906 env[1709]: 2024-02-09 19:01:52.581 [INFO][5142] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="16d2cb017087eb605f04a6844e9e6171d80867b5a4198094d053a36fd1146bb8" Namespace="calico-apiserver" Pod="calico-apiserver-59fbf65d7-mkmkb" WorkloadEndpoint="ip--172--31--19--7-k8s-calico--apiserver--59fbf65d7--mkmkb-eth0" Feb 9 19:01:52.613906 env[1709]: 2024-02-09 19:01:52.582 [INFO][5142] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="16d2cb017087eb605f04a6844e9e6171d80867b5a4198094d053a36fd1146bb8" Namespace="calico-apiserver" Pod="calico-apiserver-59fbf65d7-mkmkb" WorkloadEndpoint="ip--172--31--19--7-k8s-calico--apiserver--59fbf65d7--mkmkb-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--7-k8s-calico--apiserver--59fbf65d7--mkmkb-eth0", GenerateName:"calico-apiserver-59fbf65d7-", Namespace:"calico-apiserver", SelfLink:"", UID:"dc24791c-61f8-4dc5-8442-209a6f3756c5", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 1, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59fbf65d7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-7", ContainerID:"16d2cb017087eb605f04a6844e9e6171d80867b5a4198094d053a36fd1146bb8", Pod:"calico-apiserver-59fbf65d7-mkmkb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.39.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali56c38188811", MAC:"fa:06:85:63:79:32", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:01:52.613906 env[1709]: 2024-02-09 19:01:52.605 [INFO][5142] k8s.go 491: Wrote updated endpoint to datastore ContainerID="16d2cb017087eb605f04a6844e9e6171d80867b5a4198094d053a36fd1146bb8" Namespace="calico-apiserver" Pod="calico-apiserver-59fbf65d7-mkmkb" WorkloadEndpoint="ip--172--31--19--7-k8s-calico--apiserver--59fbf65d7--mkmkb-eth0" Feb 9 19:01:52.636905 (udev-worker)[5216]: Network interface NamePolicy= disabled on kernel command 
line. Feb 9 19:01:52.642200 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali469ef3dc70d: link becomes ready Feb 9 19:01:52.642649 systemd-networkd[1512]: cali469ef3dc70d: Link UP Feb 9 19:01:52.643109 systemd-networkd[1512]: cali469ef3dc70d: Gained carrier Feb 9 19:01:52.694433 env[1709]: 2024-02-09 19:01:52.371 [INFO][5155] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--7-k8s-calico--apiserver--59fbf65d7--hkxpp-eth0 calico-apiserver-59fbf65d7- calico-apiserver 00941537-21dd-4a7a-8c68-9ebeed83da86 832 0 2024-02-09 19:01:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:59fbf65d7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-19-7 calico-apiserver-59fbf65d7-hkxpp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali469ef3dc70d [] []}} ContainerID="109d29d64071f363bbe5f98d78b6b156ae75bf519069a6dcbcf888dedb5564c1" Namespace="calico-apiserver" Pod="calico-apiserver-59fbf65d7-hkxpp" WorkloadEndpoint="ip--172--31--19--7-k8s-calico--apiserver--59fbf65d7--hkxpp-" Feb 9 19:01:52.694433 env[1709]: 2024-02-09 19:01:52.372 [INFO][5155] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="109d29d64071f363bbe5f98d78b6b156ae75bf519069a6dcbcf888dedb5564c1" Namespace="calico-apiserver" Pod="calico-apiserver-59fbf65d7-hkxpp" WorkloadEndpoint="ip--172--31--19--7-k8s-calico--apiserver--59fbf65d7--hkxpp-eth0" Feb 9 19:01:52.694433 env[1709]: 2024-02-09 19:01:52.519 [INFO][5186] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="109d29d64071f363bbe5f98d78b6b156ae75bf519069a6dcbcf888dedb5564c1" HandleID="k8s-pod-network.109d29d64071f363bbe5f98d78b6b156ae75bf519069a6dcbcf888dedb5564c1" Workload="ip--172--31--19--7-k8s-calico--apiserver--59fbf65d7--hkxpp-eth0" Feb 9 19:01:52.694433 
env[1709]: 2024-02-09 19:01:52.554 [INFO][5186] ipam_plugin.go 268: Auto assigning IP ContainerID="109d29d64071f363bbe5f98d78b6b156ae75bf519069a6dcbcf888dedb5564c1" HandleID="k8s-pod-network.109d29d64071f363bbe5f98d78b6b156ae75bf519069a6dcbcf888dedb5564c1" Workload="ip--172--31--19--7-k8s-calico--apiserver--59fbf65d7--hkxpp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027d7b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-19-7", "pod":"calico-apiserver-59fbf65d7-hkxpp", "timestamp":"2024-02-09 19:01:52.518896007 +0000 UTC"}, Hostname:"ip-172-31-19-7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 19:01:52.694433 env[1709]: 2024-02-09 19:01:52.554 [INFO][5186] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:01:52.694433 env[1709]: 2024-02-09 19:01:52.555 [INFO][5186] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 9 19:01:52.694433 env[1709]: 2024-02-09 19:01:52.555 [INFO][5186] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-7' Feb 9 19:01:52.694433 env[1709]: 2024-02-09 19:01:52.557 [INFO][5186] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.109d29d64071f363bbe5f98d78b6b156ae75bf519069a6dcbcf888dedb5564c1" host="ip-172-31-19-7" Feb 9 19:01:52.694433 env[1709]: 2024-02-09 19:01:52.568 [INFO][5186] ipam.go 372: Looking up existing affinities for host host="ip-172-31-19-7" Feb 9 19:01:52.694433 env[1709]: 2024-02-09 19:01:52.577 [INFO][5186] ipam.go 489: Trying affinity for 192.168.39.64/26 host="ip-172-31-19-7" Feb 9 19:01:52.694433 env[1709]: 2024-02-09 19:01:52.581 [INFO][5186] ipam.go 155: Attempting to load block cidr=192.168.39.64/26 host="ip-172-31-19-7" Feb 9 19:01:52.694433 env[1709]: 2024-02-09 19:01:52.588 [INFO][5186] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.39.64/26 host="ip-172-31-19-7" Feb 9 19:01:52.694433 env[1709]: 2024-02-09 19:01:52.588 [INFO][5186] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.39.64/26 handle="k8s-pod-network.109d29d64071f363bbe5f98d78b6b156ae75bf519069a6dcbcf888dedb5564c1" host="ip-172-31-19-7" Feb 9 19:01:52.694433 env[1709]: 2024-02-09 19:01:52.606 [INFO][5186] ipam.go 1682: Creating new handle: k8s-pod-network.109d29d64071f363bbe5f98d78b6b156ae75bf519069a6dcbcf888dedb5564c1 Feb 9 19:01:52.694433 env[1709]: 2024-02-09 19:01:52.614 [INFO][5186] ipam.go 1203: Writing block in order to claim IPs block=192.168.39.64/26 handle="k8s-pod-network.109d29d64071f363bbe5f98d78b6b156ae75bf519069a6dcbcf888dedb5564c1" host="ip-172-31-19-7" Feb 9 19:01:52.694433 env[1709]: 2024-02-09 19:01:52.626 [INFO][5186] ipam.go 1216: Successfully claimed IPs: [192.168.39.70/26] block=192.168.39.64/26 handle="k8s-pod-network.109d29d64071f363bbe5f98d78b6b156ae75bf519069a6dcbcf888dedb5564c1" host="ip-172-31-19-7" Feb 9 19:01:52.694433 env[1709]: 
2024-02-09 19:01:52.626 [INFO][5186] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.39.70/26] handle="k8s-pod-network.109d29d64071f363bbe5f98d78b6b156ae75bf519069a6dcbcf888dedb5564c1" host="ip-172-31-19-7" Feb 9 19:01:52.694433 env[1709]: 2024-02-09 19:01:52.626 [INFO][5186] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:01:52.694433 env[1709]: 2024-02-09 19:01:52.626 [INFO][5186] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.39.70/26] IPv6=[] ContainerID="109d29d64071f363bbe5f98d78b6b156ae75bf519069a6dcbcf888dedb5564c1" HandleID="k8s-pod-network.109d29d64071f363bbe5f98d78b6b156ae75bf519069a6dcbcf888dedb5564c1" Workload="ip--172--31--19--7-k8s-calico--apiserver--59fbf65d7--hkxpp-eth0" Feb 9 19:01:52.696201 env[1709]: 2024-02-09 19:01:52.629 [INFO][5155] k8s.go 385: Populated endpoint ContainerID="109d29d64071f363bbe5f98d78b6b156ae75bf519069a6dcbcf888dedb5564c1" Namespace="calico-apiserver" Pod="calico-apiserver-59fbf65d7-hkxpp" WorkloadEndpoint="ip--172--31--19--7-k8s-calico--apiserver--59fbf65d7--hkxpp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--7-k8s-calico--apiserver--59fbf65d7--hkxpp-eth0", GenerateName:"calico-apiserver-59fbf65d7-", Namespace:"calico-apiserver", SelfLink:"", UID:"00941537-21dd-4a7a-8c68-9ebeed83da86", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 1, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59fbf65d7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-7", ContainerID:"", Pod:"calico-apiserver-59fbf65d7-hkxpp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.39.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali469ef3dc70d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:01:52.696201 env[1709]: 2024-02-09 19:01:52.629 [INFO][5155] k8s.go 386: Calico CNI using IPs: [192.168.39.70/32] ContainerID="109d29d64071f363bbe5f98d78b6b156ae75bf519069a6dcbcf888dedb5564c1" Namespace="calico-apiserver" Pod="calico-apiserver-59fbf65d7-hkxpp" WorkloadEndpoint="ip--172--31--19--7-k8s-calico--apiserver--59fbf65d7--hkxpp-eth0" Feb 9 19:01:52.696201 env[1709]: 2024-02-09 19:01:52.629 [INFO][5155] dataplane_linux.go 68: Setting the host side veth name to cali469ef3dc70d ContainerID="109d29d64071f363bbe5f98d78b6b156ae75bf519069a6dcbcf888dedb5564c1" Namespace="calico-apiserver" Pod="calico-apiserver-59fbf65d7-hkxpp" WorkloadEndpoint="ip--172--31--19--7-k8s-calico--apiserver--59fbf65d7--hkxpp-eth0" Feb 9 19:01:52.696201 env[1709]: 2024-02-09 19:01:52.641 [INFO][5155] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="109d29d64071f363bbe5f98d78b6b156ae75bf519069a6dcbcf888dedb5564c1" Namespace="calico-apiserver" Pod="calico-apiserver-59fbf65d7-hkxpp" WorkloadEndpoint="ip--172--31--19--7-k8s-calico--apiserver--59fbf65d7--hkxpp-eth0" Feb 9 19:01:52.696201 env[1709]: 2024-02-09 19:01:52.645 [INFO][5155] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="109d29d64071f363bbe5f98d78b6b156ae75bf519069a6dcbcf888dedb5564c1" Namespace="calico-apiserver" Pod="calico-apiserver-59fbf65d7-hkxpp" WorkloadEndpoint="ip--172--31--19--7-k8s-calico--apiserver--59fbf65d7--hkxpp-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--7-k8s-calico--apiserver--59fbf65d7--hkxpp-eth0", GenerateName:"calico-apiserver-59fbf65d7-", Namespace:"calico-apiserver", SelfLink:"", UID:"00941537-21dd-4a7a-8c68-9ebeed83da86", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 1, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59fbf65d7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-7", ContainerID:"109d29d64071f363bbe5f98d78b6b156ae75bf519069a6dcbcf888dedb5564c1", Pod:"calico-apiserver-59fbf65d7-hkxpp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.39.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali469ef3dc70d", MAC:"a6:5c:32:23:9e:c9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:01:52.696201 env[1709]: 2024-02-09 19:01:52.682 [INFO][5155] k8s.go 491: Wrote updated endpoint to datastore ContainerID="109d29d64071f363bbe5f98d78b6b156ae75bf519069a6dcbcf888dedb5564c1" Namespace="calico-apiserver" Pod="calico-apiserver-59fbf65d7-hkxpp" WorkloadEndpoint="ip--172--31--19--7-k8s-calico--apiserver--59fbf65d7--hkxpp-eth0" Feb 9 19:01:52.696000 audit[5239]: NETFILTER_CFG table=filter:131 family=2 entries=65 
op=nft_register_chain pid=5239 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:01:52.696000 audit[5239]: SYSCALL arch=c000003e syscall=46 success=yes exit=32144 a0=3 a1=7ffe10569380 a2=0 a3=7ffe1056936c items=0 ppid=4347 pid=5239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:52.696000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:01:52.762137 env[1709]: time="2024-02-09T19:01:52.751761620Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:01:52.762137 env[1709]: time="2024-02-09T19:01:52.751809026Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:01:52.762137 env[1709]: time="2024-02-09T19:01:52.751825997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:01:52.762137 env[1709]: time="2024-02-09T19:01:52.751973537Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/16d2cb017087eb605f04a6844e9e6171d80867b5a4198094d053a36fd1146bb8 pid=5257 runtime=io.containerd.runc.v2 Feb 9 19:01:52.796000 audit[5280]: NETFILTER_CFG table=filter:132 family=2 entries=46 op=nft_register_chain pid=5280 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:01:52.796000 audit[5280]: SYSCALL arch=c000003e syscall=46 success=yes exit=23292 a0=3 a1=7ffca1a4b350 a2=0 a3=7ffca1a4b33c items=0 ppid=4347 pid=5280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:52.796000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:01:52.811099 env[1709]: time="2024-02-09T19:01:52.811020520Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:01:52.811263 env[1709]: time="2024-02-09T19:01:52.811113108Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:01:52.811263 env[1709]: time="2024-02-09T19:01:52.811144066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:01:52.811424 env[1709]: time="2024-02-09T19:01:52.811384667Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/109d29d64071f363bbe5f98d78b6b156ae75bf519069a6dcbcf888dedb5564c1 pid=5292 runtime=io.containerd.runc.v2 Feb 9 19:01:52.862021 systemd[1]: run-containerd-runc-k8s.io-109d29d64071f363bbe5f98d78b6b156ae75bf519069a6dcbcf888dedb5564c1-runc.SInPdA.mount: Deactivated successfully. Feb 9 19:01:52.959018 env[1709]: time="2024-02-09T19:01:52.958966872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59fbf65d7-mkmkb,Uid:dc24791c-61f8-4dc5-8442-209a6f3756c5,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"16d2cb017087eb605f04a6844e9e6171d80867b5a4198094d053a36fd1146bb8\"" Feb 9 19:01:52.963910 env[1709]: time="2024-02-09T19:01:52.963853998Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\"" Feb 9 19:01:52.975578 env[1709]: 2024-02-09 19:01:52.785 [WARNING][5215] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--7-k8s-calico--kube--controllers--78449db666--g6hg6-eth0", GenerateName:"calico-kube-controllers-78449db666-", Namespace:"calico-system", SelfLink:"", UID:"607248e1-9d7a-4a9e-9970-28f4dcfc35fc", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 1, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78449db666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-7", ContainerID:"add3667226ff6d52a34c0fa1f0ce7852032b86882dbaec0a3f7551aef714dcf7", Pod:"calico-kube-controllers-78449db666-g6hg6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.39.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali709f8c0876a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:01:52.975578 env[1709]: 2024-02-09 19:01:52.786 [INFO][5215] k8s.go 578: Cleaning up netns ContainerID="0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649" Feb 9 19:01:52.975578 env[1709]: 2024-02-09 19:01:52.786 [INFO][5215] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649" iface="eth0" netns="" Feb 9 19:01:52.975578 env[1709]: 2024-02-09 19:01:52.786 [INFO][5215] k8s.go 585: Releasing IP address(es) ContainerID="0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649" Feb 9 19:01:52.975578 env[1709]: 2024-02-09 19:01:52.786 [INFO][5215] utils.go 188: Calico CNI releasing IP address ContainerID="0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649" Feb 9 19:01:52.975578 env[1709]: 2024-02-09 19:01:52.924 [INFO][5278] ipam_plugin.go 415: Releasing address using handleID ContainerID="0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649" HandleID="k8s-pod-network.0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649" Workload="ip--172--31--19--7-k8s-calico--kube--controllers--78449db666--g6hg6-eth0" Feb 9 19:01:52.975578 env[1709]: 2024-02-09 19:01:52.925 [INFO][5278] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:01:52.975578 env[1709]: 2024-02-09 19:01:52.942 [INFO][5278] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:01:52.975578 env[1709]: 2024-02-09 19:01:52.961 [WARNING][5278] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649" HandleID="k8s-pod-network.0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649" Workload="ip--172--31--19--7-k8s-calico--kube--controllers--78449db666--g6hg6-eth0" Feb 9 19:01:52.975578 env[1709]: 2024-02-09 19:01:52.961 [INFO][5278] ipam_plugin.go 443: Releasing address using workloadID ContainerID="0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649" HandleID="k8s-pod-network.0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649" Workload="ip--172--31--19--7-k8s-calico--kube--controllers--78449db666--g6hg6-eth0" Feb 9 19:01:52.975578 env[1709]: 2024-02-09 19:01:52.968 [INFO][5278] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:01:52.975578 env[1709]: 2024-02-09 19:01:52.973 [INFO][5215] k8s.go 591: Teardown processing complete. ContainerID="0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649" Feb 9 19:01:52.976855 env[1709]: time="2024-02-09T19:01:52.976814391Z" level=info msg="TearDown network for sandbox \"0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649\" successfully" Feb 9 19:01:52.984925 env[1709]: time="2024-02-09T19:01:52.984889295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59fbf65d7-hkxpp,Uid:00941537-21dd-4a7a-8c68-9ebeed83da86,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"109d29d64071f363bbe5f98d78b6b156ae75bf519069a6dcbcf888dedb5564c1\"" Feb 9 19:01:52.993610 env[1709]: time="2024-02-09T19:01:52.992618054Z" level=info msg="RemovePodSandbox \"0817c23e7462945d9f9a7972bfd49cc1620eb53e19c2e407b200e285be1d3649\" returns successfully" Feb 9 19:01:52.995629 env[1709]: time="2024-02-09T19:01:52.995577440Z" level=info msg="StopPodSandbox for \"a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a\"" Feb 9 19:01:53.105198 env[1709]: 2024-02-09 19:01:53.047 [WARNING][5360] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint 
ConainerID, don't delete WEP. ContainerID="a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--7-k8s-csi--node--driver--sfvlp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"196d1d2d-b701-4a86-946e-fecee4636cf4", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 1, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-7", ContainerID:"c48c9130704f7c9c50f210bcff9cdaf7fbcff1eab5fa2809764a81a766952c63", Pod:"csi-node-driver-sfvlp", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.39.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calibaf62cc3e91", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:01:53.105198 env[1709]: 2024-02-09 19:01:53.047 [INFO][5360] k8s.go 578: Cleaning up netns ContainerID="a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a" Feb 9 19:01:53.105198 env[1709]: 2024-02-09 19:01:53.047 [INFO][5360] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a" iface="eth0" netns="" Feb 9 19:01:53.105198 env[1709]: 2024-02-09 19:01:53.047 [INFO][5360] k8s.go 585: Releasing IP address(es) ContainerID="a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a" Feb 9 19:01:53.105198 env[1709]: 2024-02-09 19:01:53.047 [INFO][5360] utils.go 188: Calico CNI releasing IP address ContainerID="a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a" Feb 9 19:01:53.105198 env[1709]: 2024-02-09 19:01:53.080 [INFO][5366] ipam_plugin.go 415: Releasing address using handleID ContainerID="a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a" HandleID="k8s-pod-network.a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a" Workload="ip--172--31--19--7-k8s-csi--node--driver--sfvlp-eth0" Feb 9 19:01:53.105198 env[1709]: 2024-02-09 19:01:53.080 [INFO][5366] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:01:53.105198 env[1709]: 2024-02-09 19:01:53.080 [INFO][5366] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:01:53.105198 env[1709]: 2024-02-09 19:01:53.094 [WARNING][5366] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a" HandleID="k8s-pod-network.a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a" Workload="ip--172--31--19--7-k8s-csi--node--driver--sfvlp-eth0" Feb 9 19:01:53.105198 env[1709]: 2024-02-09 19:01:53.094 [INFO][5366] ipam_plugin.go 443: Releasing address using workloadID ContainerID="a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a" HandleID="k8s-pod-network.a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a" Workload="ip--172--31--19--7-k8s-csi--node--driver--sfvlp-eth0" Feb 9 19:01:53.105198 env[1709]: 2024-02-09 19:01:53.097 [INFO][5366] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 19:01:53.105198 env[1709]: 2024-02-09 19:01:53.103 [INFO][5360] k8s.go 591: Teardown processing complete. ContainerID="a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a" Feb 9 19:01:53.105198 env[1709]: time="2024-02-09T19:01:53.105220913Z" level=info msg="TearDown network for sandbox \"a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a\" successfully" Feb 9 19:01:53.105198 env[1709]: time="2024-02-09T19:01:53.105249887Z" level=info msg="StopPodSandbox for \"a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a\" returns successfully" Feb 9 19:01:53.107389 env[1709]: time="2024-02-09T19:01:53.106698405Z" level=info msg="RemovePodSandbox for \"a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a\"" Feb 9 19:01:53.107389 env[1709]: time="2024-02-09T19:01:53.106726627Z" level=info msg="Forcibly stopping sandbox \"a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a\"" Feb 9 19:01:53.214500 env[1709]: 2024-02-09 19:01:53.166 [WARNING][5385] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--7-k8s-csi--node--driver--sfvlp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"196d1d2d-b701-4a86-946e-fecee4636cf4", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 1, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-7", ContainerID:"c48c9130704f7c9c50f210bcff9cdaf7fbcff1eab5fa2809764a81a766952c63", Pod:"csi-node-driver-sfvlp", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.39.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calibaf62cc3e91", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:01:53.214500 env[1709]: 2024-02-09 19:01:53.167 [INFO][5385] k8s.go 578: Cleaning up netns ContainerID="a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a" Feb 9 19:01:53.214500 env[1709]: 2024-02-09 19:01:53.167 [INFO][5385] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a" iface="eth0" netns="" Feb 9 19:01:53.214500 env[1709]: 2024-02-09 19:01:53.167 [INFO][5385] k8s.go 585: Releasing IP address(es) ContainerID="a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a" Feb 9 19:01:53.214500 env[1709]: 2024-02-09 19:01:53.167 [INFO][5385] utils.go 188: Calico CNI releasing IP address ContainerID="a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a" Feb 9 19:01:53.214500 env[1709]: 2024-02-09 19:01:53.198 [INFO][5392] ipam_plugin.go 415: Releasing address using handleID ContainerID="a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a" HandleID="k8s-pod-network.a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a" Workload="ip--172--31--19--7-k8s-csi--node--driver--sfvlp-eth0" Feb 9 19:01:53.214500 env[1709]: 2024-02-09 19:01:53.198 [INFO][5392] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:01:53.214500 env[1709]: 2024-02-09 19:01:53.198 [INFO][5392] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:01:53.214500 env[1709]: 2024-02-09 19:01:53.207 [WARNING][5392] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a" HandleID="k8s-pod-network.a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a" Workload="ip--172--31--19--7-k8s-csi--node--driver--sfvlp-eth0" Feb 9 19:01:53.214500 env[1709]: 2024-02-09 19:01:53.207 [INFO][5392] ipam_plugin.go 443: Releasing address using workloadID ContainerID="a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a" HandleID="k8s-pod-network.a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a" Workload="ip--172--31--19--7-k8s-csi--node--driver--sfvlp-eth0" Feb 9 19:01:53.214500 env[1709]: 2024-02-09 19:01:53.211 [INFO][5392] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 19:01:53.214500 env[1709]: 2024-02-09 19:01:53.212 [INFO][5385] k8s.go 591: Teardown processing complete. ContainerID="a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a" Feb 9 19:01:53.215535 env[1709]: time="2024-02-09T19:01:53.214552324Z" level=info msg="TearDown network for sandbox \"a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a\" successfully" Feb 9 19:01:53.225344 env[1709]: time="2024-02-09T19:01:53.225291814Z" level=info msg="RemovePodSandbox \"a8a54689b4cbc9c8b85c89ea620da7ee46a5babee681f5b16defb8c3618def6a\" returns successfully" Feb 9 19:01:53.226131 env[1709]: time="2024-02-09T19:01:53.226093450Z" level=info msg="StopPodSandbox for \"06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5\"" Feb 9 19:01:53.329279 env[1709]: 2024-02-09 19:01:53.282 [WARNING][5412] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--7-k8s-coredns--787d4945fb--ztsmf-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"2fa17834-5f19-4801-a2cd-cf352498e924", ResourceVersion:"727", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 1, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-7", 
ContainerID:"9fa2a74fc65141821cb7eb58642ffb36184a27a379b4b87856ec62a005956d8d", Pod:"coredns-787d4945fb-ztsmf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.39.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5e07b897b0b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:01:53.329279 env[1709]: 2024-02-09 19:01:53.283 [INFO][5412] k8s.go 578: Cleaning up netns ContainerID="06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5" Feb 9 19:01:53.329279 env[1709]: 2024-02-09 19:01:53.283 [INFO][5412] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5" iface="eth0" netns="" Feb 9 19:01:53.329279 env[1709]: 2024-02-09 19:01:53.283 [INFO][5412] k8s.go 585: Releasing IP address(es) ContainerID="06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5" Feb 9 19:01:53.329279 env[1709]: 2024-02-09 19:01:53.283 [INFO][5412] utils.go 188: Calico CNI releasing IP address ContainerID="06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5" Feb 9 19:01:53.329279 env[1709]: 2024-02-09 19:01:53.309 [INFO][5419] ipam_plugin.go 415: Releasing address using handleID ContainerID="06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5" HandleID="k8s-pod-network.06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5" Workload="ip--172--31--19--7-k8s-coredns--787d4945fb--ztsmf-eth0" Feb 9 19:01:53.329279 env[1709]: 2024-02-09 19:01:53.309 [INFO][5419] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:01:53.329279 env[1709]: 2024-02-09 19:01:53.309 [INFO][5419] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:01:53.329279 env[1709]: 2024-02-09 19:01:53.319 [WARNING][5419] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5" HandleID="k8s-pod-network.06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5" Workload="ip--172--31--19--7-k8s-coredns--787d4945fb--ztsmf-eth0" Feb 9 19:01:53.329279 env[1709]: 2024-02-09 19:01:53.319 [INFO][5419] ipam_plugin.go 443: Releasing address using workloadID ContainerID="06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5" HandleID="k8s-pod-network.06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5" Workload="ip--172--31--19--7-k8s-coredns--787d4945fb--ztsmf-eth0" Feb 9 19:01:53.329279 env[1709]: 2024-02-09 19:01:53.322 [INFO][5419] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 19:01:53.329279 env[1709]: 2024-02-09 19:01:53.325 [INFO][5412] k8s.go 591: Teardown processing complete. ContainerID="06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5" Feb 9 19:01:53.331913 env[1709]: time="2024-02-09T19:01:53.329317227Z" level=info msg="TearDown network for sandbox \"06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5\" successfully" Feb 9 19:01:53.331913 env[1709]: time="2024-02-09T19:01:53.329354698Z" level=info msg="StopPodSandbox for \"06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5\" returns successfully" Feb 9 19:01:53.331913 env[1709]: time="2024-02-09T19:01:53.331264825Z" level=info msg="RemovePodSandbox for \"06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5\"" Feb 9 19:01:53.331913 env[1709]: time="2024-02-09T19:01:53.331335306Z" level=info msg="Forcibly stopping sandbox \"06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5\"" Feb 9 19:01:53.434646 env[1709]: 2024-02-09 19:01:53.384 [WARNING][5437] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--7-k8s-coredns--787d4945fb--ztsmf-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"2fa17834-5f19-4801-a2cd-cf352498e924", ResourceVersion:"727", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 1, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-7", ContainerID:"9fa2a74fc65141821cb7eb58642ffb36184a27a379b4b87856ec62a005956d8d", Pod:"coredns-787d4945fb-ztsmf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.39.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5e07b897b0b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:01:53.434646 env[1709]: 2024-02-09 19:01:53.385 [INFO][5437] k8s.go 578: Cleaning up netns 
ContainerID="06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5" Feb 9 19:01:53.434646 env[1709]: 2024-02-09 19:01:53.385 [INFO][5437] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5" iface="eth0" netns="" Feb 9 19:01:53.434646 env[1709]: 2024-02-09 19:01:53.385 [INFO][5437] k8s.go 585: Releasing IP address(es) ContainerID="06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5" Feb 9 19:01:53.434646 env[1709]: 2024-02-09 19:01:53.385 [INFO][5437] utils.go 188: Calico CNI releasing IP address ContainerID="06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5" Feb 9 19:01:53.434646 env[1709]: 2024-02-09 19:01:53.419 [INFO][5443] ipam_plugin.go 415: Releasing address using handleID ContainerID="06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5" HandleID="k8s-pod-network.06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5" Workload="ip--172--31--19--7-k8s-coredns--787d4945fb--ztsmf-eth0" Feb 9 19:01:53.434646 env[1709]: 2024-02-09 19:01:53.420 [INFO][5443] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:01:53.434646 env[1709]: 2024-02-09 19:01:53.420 [INFO][5443] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:01:53.434646 env[1709]: 2024-02-09 19:01:53.429 [WARNING][5443] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5" HandleID="k8s-pod-network.06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5" Workload="ip--172--31--19--7-k8s-coredns--787d4945fb--ztsmf-eth0" Feb 9 19:01:53.434646 env[1709]: 2024-02-09 19:01:53.429 [INFO][5443] ipam_plugin.go 443: Releasing address using workloadID ContainerID="06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5" HandleID="k8s-pod-network.06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5" Workload="ip--172--31--19--7-k8s-coredns--787d4945fb--ztsmf-eth0" Feb 9 19:01:53.434646 env[1709]: 2024-02-09 19:01:53.431 [INFO][5443] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:01:53.434646 env[1709]: 2024-02-09 19:01:53.433 [INFO][5437] k8s.go 591: Teardown processing complete. ContainerID="06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5" Feb 9 19:01:53.436473 env[1709]: time="2024-02-09T19:01:53.434603517Z" level=info msg="TearDown network for sandbox \"06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5\" successfully" Feb 9 19:01:53.441970 env[1709]: time="2024-02-09T19:01:53.441920209Z" level=info msg="RemovePodSandbox \"06c2be044364f1104fe58c1ee35f145f7d37d34a4d76c2b7eed0eb8dca0627c5\" returns successfully" Feb 9 19:01:53.442623 env[1709]: time="2024-02-09T19:01:53.442451248Z" level=info msg="StopPodSandbox for \"642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608\"" Feb 9 19:01:53.533365 env[1709]: 2024-02-09 19:01:53.491 [WARNING][5461] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--7-k8s-coredns--787d4945fb--ttftx-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"91c852c3-24a9-4f7a-b9bf-6a6e2d9f5e9a", ResourceVersion:"748", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 1, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-7", ContainerID:"f40315cf91c0505cefa4da32353c71bd1a11f0649323ea5b9a70b62a37938fa4", Pod:"coredns-787d4945fb-ttftx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.39.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic8dbe614170", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:01:53.533365 env[1709]: 2024-02-09 19:01:53.491 [INFO][5461] k8s.go 578: Cleaning up netns 
ContainerID="642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608" Feb 9 19:01:53.533365 env[1709]: 2024-02-09 19:01:53.491 [INFO][5461] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608" iface="eth0" netns="" Feb 9 19:01:53.533365 env[1709]: 2024-02-09 19:01:53.491 [INFO][5461] k8s.go 585: Releasing IP address(es) ContainerID="642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608" Feb 9 19:01:53.533365 env[1709]: 2024-02-09 19:01:53.491 [INFO][5461] utils.go 188: Calico CNI releasing IP address ContainerID="642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608" Feb 9 19:01:53.533365 env[1709]: 2024-02-09 19:01:53.520 [INFO][5467] ipam_plugin.go 415: Releasing address using handleID ContainerID="642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608" HandleID="k8s-pod-network.642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608" Workload="ip--172--31--19--7-k8s-coredns--787d4945fb--ttftx-eth0" Feb 9 19:01:53.533365 env[1709]: 2024-02-09 19:01:53.520 [INFO][5467] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:01:53.533365 env[1709]: 2024-02-09 19:01:53.520 [INFO][5467] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:01:53.533365 env[1709]: 2024-02-09 19:01:53.527 [WARNING][5467] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608" HandleID="k8s-pod-network.642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608" Workload="ip--172--31--19--7-k8s-coredns--787d4945fb--ttftx-eth0" Feb 9 19:01:53.533365 env[1709]: 2024-02-09 19:01:53.528 [INFO][5467] ipam_plugin.go 443: Releasing address using workloadID ContainerID="642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608" HandleID="k8s-pod-network.642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608" Workload="ip--172--31--19--7-k8s-coredns--787d4945fb--ttftx-eth0" Feb 9 19:01:53.533365 env[1709]: 2024-02-09 19:01:53.530 [INFO][5467] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:01:53.533365 env[1709]: 2024-02-09 19:01:53.531 [INFO][5461] k8s.go 591: Teardown processing complete. ContainerID="642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608" Feb 9 19:01:53.535103 env[1709]: time="2024-02-09T19:01:53.533408509Z" level=info msg="TearDown network for sandbox \"642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608\" successfully" Feb 9 19:01:53.535103 env[1709]: time="2024-02-09T19:01:53.533441989Z" level=info msg="StopPodSandbox for \"642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608\" returns successfully" Feb 9 19:01:53.535103 env[1709]: time="2024-02-09T19:01:53.534949599Z" level=info msg="RemovePodSandbox for \"642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608\"" Feb 9 19:01:53.535103 env[1709]: time="2024-02-09T19:01:53.534992896Z" level=info msg="Forcibly stopping sandbox \"642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608\"" Feb 9 19:01:53.672392 env[1709]: 2024-02-09 19:01:53.606 [WARNING][5486] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--7-k8s-coredns--787d4945fb--ttftx-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"91c852c3-24a9-4f7a-b9bf-6a6e2d9f5e9a", ResourceVersion:"748", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 1, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-7", ContainerID:"f40315cf91c0505cefa4da32353c71bd1a11f0649323ea5b9a70b62a37938fa4", Pod:"coredns-787d4945fb-ttftx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.39.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic8dbe614170", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:01:53.672392 env[1709]: 2024-02-09 19:01:53.610 [INFO][5486] k8s.go 578: Cleaning up netns 
ContainerID="642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608" Feb 9 19:01:53.672392 env[1709]: 2024-02-09 19:01:53.610 [INFO][5486] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608" iface="eth0" netns="" Feb 9 19:01:53.672392 env[1709]: 2024-02-09 19:01:53.610 [INFO][5486] k8s.go 585: Releasing IP address(es) ContainerID="642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608" Feb 9 19:01:53.672392 env[1709]: 2024-02-09 19:01:53.610 [INFO][5486] utils.go 188: Calico CNI releasing IP address ContainerID="642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608" Feb 9 19:01:53.672392 env[1709]: 2024-02-09 19:01:53.654 [INFO][5492] ipam_plugin.go 415: Releasing address using handleID ContainerID="642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608" HandleID="k8s-pod-network.642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608" Workload="ip--172--31--19--7-k8s-coredns--787d4945fb--ttftx-eth0" Feb 9 19:01:53.672392 env[1709]: 2024-02-09 19:01:53.654 [INFO][5492] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:01:53.672392 env[1709]: 2024-02-09 19:01:53.655 [INFO][5492] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:01:53.672392 env[1709]: 2024-02-09 19:01:53.666 [WARNING][5492] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608" HandleID="k8s-pod-network.642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608" Workload="ip--172--31--19--7-k8s-coredns--787d4945fb--ttftx-eth0" Feb 9 19:01:53.672392 env[1709]: 2024-02-09 19:01:53.666 [INFO][5492] ipam_plugin.go 443: Releasing address using workloadID ContainerID="642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608" HandleID="k8s-pod-network.642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608" Workload="ip--172--31--19--7-k8s-coredns--787d4945fb--ttftx-eth0" Feb 9 19:01:53.672392 env[1709]: 2024-02-09 19:01:53.668 [INFO][5492] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:01:53.672392 env[1709]: 2024-02-09 19:01:53.670 [INFO][5486] k8s.go 591: Teardown processing complete. ContainerID="642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608" Feb 9 19:01:53.673256 env[1709]: time="2024-02-09T19:01:53.672541985Z" level=info msg="TearDown network for sandbox \"642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608\" successfully" Feb 9 19:01:53.678093 env[1709]: time="2024-02-09T19:01:53.677992530Z" level=info msg="RemovePodSandbox \"642d964bf7078f84c5f594947d0b1b728cd776f8367a69fad4204078859ca608\" returns successfully" Feb 9 19:01:54.051949 systemd-networkd[1512]: cali469ef3dc70d: Gained IPv6LL Feb 9 19:01:54.242893 systemd-networkd[1512]: cali56c38188811: Gained IPv6LL Feb 9 19:01:54.543619 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1706121790.mount: Deactivated successfully. Feb 9 19:01:57.021682 kernel: kauditd_printk_skb: 8 callbacks suppressed Feb 9 19:01:57.022019 kernel: audit: type=1130 audit(1707505317.013:328): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.19.7:22-139.178.68.195:42810 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:01:57.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.19.7:22-139.178.68.195:42810 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:57.014066 systemd[1]: Started sshd@7-172.31.19.7:22-139.178.68.195:42810.service. Feb 9 19:01:57.292571 kernel: audit: type=1101 audit(1707505317.283:329): pid=5499 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:01:57.283000 audit[5499]: USER_ACCT pid=5499 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:01:57.294301 sshd[5499]: Accepted publickey for core from 139.178.68.195 port 42810 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 19:01:57.295000 audit[5499]: CRED_ACQ pid=5499 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:01:57.307862 kernel: audit: type=1103 audit(1707505317.295:330): pid=5499 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:01:57.307946 kernel: audit: type=1006 audit(1707505317.302:331): pid=5499 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Feb 9 19:01:57.315647 kernel: 
audit: type=1300 audit(1707505317.302:331): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd2c06cb50 a2=3 a3=0 items=0 ppid=1 pid=5499 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:57.302000 audit[5499]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd2c06cb50 a2=3 a3=0 items=0 ppid=1 pid=5499 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:57.302000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:01:57.319985 kernel: audit: type=1327 audit(1707505317.302:331): proctitle=737368643A20636F7265205B707269765D Feb 9 19:01:57.321624 sshd[5499]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:01:57.339556 systemd[1]: Started session-8.scope. Feb 9 19:01:57.341141 systemd-logind[1698]: New session 8 of user core. 
Feb 9 19:01:57.358000 audit[5499]: USER_START pid=5499 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:01:57.369674 kernel: audit: type=1105 audit(1707505317.358:332): pid=5499 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:01:57.362000 audit[5504]: CRED_ACQ pid=5504 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:01:57.378776 kernel: audit: type=1103 audit(1707505317.362:333): pid=5504 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:01:57.469106 systemd[1]: run-containerd-runc-k8s.io-d1b341d8aceb126cadae0fc8420c502e061bd827544986cf6e326aa32a1a04a9-runc.kN1lxD.mount: Deactivated successfully. 
Feb 9 19:01:57.573833 env[1709]: time="2024-02-09T19:01:57.573730999Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:01:57.577654 env[1709]: time="2024-02-09T19:01:57.577617741Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:848c5b919e8d33dbad8c8c64aa6aec07c29cfe6e4f6312ceafc1641ea929f91a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:01:57.580840 env[1709]: time="2024-02-09T19:01:57.580800538Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:01:57.582970 env[1709]: time="2024-02-09T19:01:57.582936159Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:5ff0bdc8d0b2e9d7819703b18867f60f9153ed01da81e2bbfa22002abec9dc26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:01:57.584190 env[1709]: time="2024-02-09T19:01:57.584063833Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\" returns image reference \"sha256:848c5b919e8d33dbad8c8c64aa6aec07c29cfe6e4f6312ceafc1641ea929f91a\"" Feb 9 19:01:57.585732 env[1709]: time="2024-02-09T19:01:57.585691270Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\"" Feb 9 19:01:57.591437 env[1709]: time="2024-02-09T19:01:57.591404500Z" level=info msg="CreateContainer within sandbox \"16d2cb017087eb605f04a6844e9e6171d80867b5a4198094d053a36fd1146bb8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 9 19:01:57.620287 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount24029517.mount: Deactivated successfully. 
Feb 9 19:01:57.628934 env[1709]: time="2024-02-09T19:01:57.628868905Z" level=info msg="CreateContainer within sandbox \"16d2cb017087eb605f04a6844e9e6171d80867b5a4198094d053a36fd1146bb8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f2a0fede62648f90d07e75b7785bea58e5e8993ca6189ca8ac5316ee84453ec1\"" Feb 9 19:01:57.632860 env[1709]: time="2024-02-09T19:01:57.632725320Z" level=info msg="StartContainer for \"f2a0fede62648f90d07e75b7785bea58e5e8993ca6189ca8ac5316ee84453ec1\"" Feb 9 19:01:57.793576 env[1709]: time="2024-02-09T19:01:57.792784610Z" level=info msg="StartContainer for \"f2a0fede62648f90d07e75b7785bea58e5e8993ca6189ca8ac5316ee84453ec1\" returns successfully" Feb 9 19:01:57.974996 env[1709]: time="2024-02-09T19:01:57.974886979Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:01:57.979858 env[1709]: time="2024-02-09T19:01:57.979801855Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:848c5b919e8d33dbad8c8c64aa6aec07c29cfe6e4f6312ceafc1641ea929f91a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:01:57.984798 env[1709]: time="2024-02-09T19:01:57.984753174Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:01:57.989793 env[1709]: time="2024-02-09T19:01:57.989743871Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:5ff0bdc8d0b2e9d7819703b18867f60f9153ed01da81e2bbfa22002abec9dc26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:01:57.999471 env[1709]: time="2024-02-09T19:01:57.993246227Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\" returns image reference 
\"sha256:848c5b919e8d33dbad8c8c64aa6aec07c29cfe6e4f6312ceafc1641ea929f91a\"" Feb 9 19:01:58.005353 env[1709]: time="2024-02-09T19:01:58.005312751Z" level=info msg="CreateContainer within sandbox \"109d29d64071f363bbe5f98d78b6b156ae75bf519069a6dcbcf888dedb5564c1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 9 19:01:58.030211 env[1709]: time="2024-02-09T19:01:58.029318977Z" level=info msg="CreateContainer within sandbox \"109d29d64071f363bbe5f98d78b6b156ae75bf519069a6dcbcf888dedb5564c1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"60ff1f681498f286036df84ec5fda0a4dd600df7ac6f19052ea66b7afc0512da\"" Feb 9 19:01:58.031364 env[1709]: time="2024-02-09T19:01:58.031331465Z" level=info msg="StartContainer for \"60ff1f681498f286036df84ec5fda0a4dd600df7ac6f19052ea66b7afc0512da\"" Feb 9 19:01:58.117000 audit[5613]: NETFILTER_CFG table=filter:133 family=2 entries=8 op=nft_register_rule pid=5613 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:01:58.117000 audit[5613]: SYSCALL arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffd778a1050 a2=0 a3=7ffd778a103c items=0 ppid=3099 pid=5613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:58.136607 kernel: audit: type=1325 audit(1707505318.117:334): table=filter:133 family=2 entries=8 op=nft_register_rule pid=5613 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:01:58.136737 kernel: audit: type=1300 audit(1707505318.117:334): arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffd778a1050 a2=0 a3=7ffd778a103c items=0 ppid=3099 pid=5613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:58.146941 sshd[5499]: 
pam_unix(sshd:session): session closed for user core Feb 9 19:01:58.117000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:01:58.142000 audit[5613]: NETFILTER_CFG table=nat:134 family=2 entries=78 op=nft_register_rule pid=5613 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:01:58.142000 audit[5613]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffd778a1050 a2=0 a3=7ffd778a103c items=0 ppid=3099 pid=5613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:58.142000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:01:58.152796 systemd-logind[1698]: Session 8 logged out. Waiting for processes to exit. Feb 9 19:01:58.148000 audit[5499]: USER_END pid=5499 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:01:58.148000 audit[5499]: CRED_DISP pid=5499 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:01:58.155443 systemd[1]: sshd@7-172.31.19.7:22-139.178.68.195:42810.service: Deactivated successfully. Feb 9 19:01:58.155000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.19.7:22-139.178.68.195:42810 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:01:58.156682 systemd[1]: session-8.scope: Deactivated successfully. Feb 9 19:01:58.159779 systemd-logind[1698]: Removed session 8. Feb 9 19:01:58.248785 env[1709]: time="2024-02-09T19:01:58.248662907Z" level=info msg="StartContainer for \"60ff1f681498f286036df84ec5fda0a4dd600df7ac6f19052ea66b7afc0512da\" returns successfully" Feb 9 19:01:58.847810 kubelet[2906]: I0209 19:01:58.847766 2906 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-59fbf65d7-mkmkb" podStartSLOduration=-9.22337202901056e+09 pod.CreationTimestamp="2024-02-09 19:01:51 +0000 UTC" firstStartedPulling="2024-02-09 19:01:52.961367496 +0000 UTC m=+61.100315332" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:01:57.851433525 +0000 UTC m=+65.990381381" watchObservedRunningTime="2024-02-09 19:01:58.844217242 +0000 UTC m=+66.983165101" Feb 9 19:01:58.849286 kubelet[2906]: I0209 19:01:58.849258 2906 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-59fbf65d7-hkxpp" podStartSLOduration=-9.223372029005568e+09 pod.CreationTimestamp="2024-02-09 19:01:51 +0000 UTC" firstStartedPulling="2024-02-09 19:01:52.987059078 +0000 UTC m=+61.126006928" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:01:58.842610605 +0000 UTC m=+66.981558466" watchObservedRunningTime="2024-02-09 19:01:58.849207718 +0000 UTC m=+66.988155577" Feb 9 19:01:58.970000 audit[5662]: NETFILTER_CFG table=filter:135 family=2 entries=8 op=nft_register_rule pid=5662 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:01:58.970000 audit[5662]: SYSCALL arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffc422628c0 a2=0 a3=7ffc422628ac items=0 ppid=3099 pid=5662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:58.970000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:01:58.977000 audit[5662]: NETFILTER_CFG table=nat:136 family=2 entries=78 op=nft_register_rule pid=5662 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:01:58.977000 audit[5662]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffc422628c0 a2=0 a3=7ffc422628ac items=0 ppid=3099 pid=5662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:58.977000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:02:03.181282 kernel: kauditd_printk_skb: 13 callbacks suppressed Feb 9 19:02:03.181969 kernel: audit: type=1130 audit(1707505323.174:341): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.19.7:22-139.178.68.195:42826 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:03.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.19.7:22-139.178.68.195:42826 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:03.174285 systemd[1]: Started sshd@8-172.31.19.7:22-139.178.68.195:42826.service. 
Feb 9 19:02:03.387000 audit[5672]: USER_ACCT pid=5672 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:03.391231 sshd[5672]: Accepted publickey for core from 139.178.68.195 port 42826 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 19:02:03.394000 audit[5672]: CRED_ACQ pid=5672 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:03.407228 sshd[5672]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:02:03.415099 kernel: audit: type=1101 audit(1707505323.387:342): pid=5672 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:03.415214 kernel: audit: type=1103 audit(1707505323.394:343): pid=5672 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:03.424957 kernel: audit: type=1006 audit(1707505323.395:344): pid=5672 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Feb 9 19:02:03.425078 kernel: audit: type=1300 audit(1707505323.395:344): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe97569140 a2=3 a3=0 items=0 ppid=1 pid=5672 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" 
subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:02:03.395000 audit[5672]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe97569140 a2=3 a3=0 items=0 ppid=1 pid=5672 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:02:03.433769 systemd-logind[1698]: New session 9 of user core. Feb 9 19:02:03.436195 kernel: audit: type=1327 audit(1707505323.395:344): proctitle=737368643A20636F7265205B707269765D Feb 9 19:02:03.395000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:02:03.435602 systemd[1]: Started session-9.scope. Feb 9 19:02:03.451000 audit[5672]: USER_START pid=5672 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:03.461111 kernel: audit: type=1105 audit(1707505323.451:345): pid=5672 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:03.462000 audit[5676]: CRED_ACQ pid=5676 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:03.476557 kernel: audit: type=1103 audit(1707505323.462:346): pid=5676 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:03.905390 sshd[5672]: 
pam_unix(sshd:session): session closed for user core Feb 9 19:02:03.907000 audit[5672]: USER_END pid=5672 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:03.911000 audit[5672]: CRED_DISP pid=5672 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:03.922081 kernel: audit: type=1106 audit(1707505323.907:347): pid=5672 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:03.922293 kernel: audit: type=1104 audit(1707505323.911:348): pid=5672 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:03.925000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.19.7:22-139.178.68.195:42826 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:03.925687 systemd[1]: sshd@8-172.31.19.7:22-139.178.68.195:42826.service: Deactivated successfully. Feb 9 19:02:03.928634 systemd[1]: session-9.scope: Deactivated successfully. Feb 9 19:02:03.929825 systemd-logind[1698]: Session 9 logged out. Waiting for processes to exit. Feb 9 19:02:03.932344 systemd-logind[1698]: Removed session 9. 
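The `audit(...)` prefixes in the kernel lines carry the event time as Unix epoch seconds with millisecond precision, followed by the audit serial number. Converting one of the values above (`audit(1707505323.174:341)`) back to UTC reproduces the surrounding syslog timestamp, a quick sanity check when correlating audit serials with journal lines:

```python
from datetime import datetime, timezone

# audit(1707505323.174:341) → epoch-seconds.milliseconds : serial
ts = datetime.fromtimestamp(1707505323.174, tz=timezone.utc)
print(ts.strftime("%Y-%m-%d %H:%M:%S"))  # → 2024-02-09 19:02:03
```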
Feb 9 19:02:08.927956 systemd[1]: Started sshd@9-172.31.19.7:22-139.178.68.195:38238.service. Feb 9 19:02:08.930498 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 19:02:08.930593 kernel: audit: type=1130 audit(1707505328.928:350): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.19.7:22-139.178.68.195:38238 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:08.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.19.7:22-139.178.68.195:38238 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:09.120000 audit[5689]: USER_ACCT pid=5689 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:09.128200 sshd[5689]: Accepted publickey for core from 139.178.68.195 port 38238 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 19:02:09.128793 kernel: audit: type=1101 audit(1707505329.120:351): pid=5689 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:09.127000 audit[5689]: CRED_ACQ pid=5689 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:09.129393 sshd[5689]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:02:09.137642 systemd[1]: Started session-10.scope. 
Feb 9 19:02:09.138616 kernel: audit: type=1103 audit(1707505329.127:352): pid=5689 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:09.140475 kernel: audit: type=1006 audit(1707505329.127:353): pid=5689 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Feb 9 19:02:09.141737 kernel: audit: type=1300 audit(1707505329.127:353): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffc54a0290 a2=3 a3=0 items=0 ppid=1 pid=5689 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:02:09.127000 audit[5689]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffc54a0290 a2=3 a3=0 items=0 ppid=1 pid=5689 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:02:09.140912 systemd-logind[1698]: New session 10 of user core. 
Feb 9 19:02:09.161423 kernel: audit: type=1327 audit(1707505329.127:353): proctitle=737368643A20636F7265205B707269765D Feb 9 19:02:09.127000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:02:09.158000 audit[5689]: USER_START pid=5689 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:09.186327 kernel: audit: type=1105 audit(1707505329.158:354): pid=5689 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:09.183000 audit[5692]: CRED_ACQ pid=5692 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:09.197172 kernel: audit: type=1103 audit(1707505329.183:355): pid=5692 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:09.475004 sshd[5689]: pam_unix(sshd:session): session closed for user core Feb 9 19:02:09.477000 audit[5689]: USER_END pid=5689 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:09.481000 audit[5689]: CRED_DISP pid=5689 uid=0 auid=500 ses=10 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:09.488667 systemd[1]: sshd@9-172.31.19.7:22-139.178.68.195:38238.service: Deactivated successfully. Feb 9 19:02:09.490771 systemd[1]: session-10.scope: Deactivated successfully. Feb 9 19:02:09.493554 kernel: audit: type=1106 audit(1707505329.477:356): pid=5689 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:09.493689 kernel: audit: type=1104 audit(1707505329.481:357): pid=5689 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:09.487000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.19.7:22-139.178.68.195:38238 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:09.494636 systemd-logind[1698]: Session 10 logged out. Waiting for processes to exit. Feb 9 19:02:09.495814 systemd-logind[1698]: Removed session 10. Feb 9 19:02:14.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.19.7:22-139.178.68.195:38254 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:14.505609 systemd[1]: Started sshd@10-172.31.19.7:22-139.178.68.195:38254.service. 
Feb 9 19:02:14.507628 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 19:02:14.507713 kernel: audit: type=1130 audit(1707505334.504:359): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.19.7:22-139.178.68.195:38254 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:14.734000 audit[5727]: USER_ACCT pid=5727 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:14.737611 sshd[5727]: Accepted publickey for core from 139.178.68.195 port 38254 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 19:02:14.749029 kernel: audit: type=1101 audit(1707505334.734:360): pid=5727 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:14.749181 kernel: audit: type=1103 audit(1707505334.741:361): pid=5727 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:14.741000 audit[5727]: CRED_ACQ pid=5727 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:14.754852 kernel: audit: type=1006 audit(1707505334.741:362): pid=5727 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Feb 9 19:02:14.765447 kernel: audit: 
type=1300 audit(1707505334.741:362): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeaafdba00 a2=3 a3=0 items=0 ppid=1 pid=5727 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:02:14.741000 audit[5727]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeaafdba00 a2=3 a3=0 items=0 ppid=1 pid=5727 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:02:14.741000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:02:14.761848 sshd[5727]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:02:14.768529 kernel: audit: type=1327 audit(1707505334.741:362): proctitle=737368643A20636F7265205B707269765D Feb 9 19:02:14.776406 systemd[1]: Started session-11.scope. Feb 9 19:02:14.776837 systemd-logind[1698]: New session 11 of user core. 
Feb 9 19:02:14.795055 kernel: audit: type=1105 audit(1707505334.784:363): pid=5727 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:14.784000 audit[5727]: USER_START pid=5727 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:14.786000 audit[5730]: CRED_ACQ pid=5730 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:14.801544 kernel: audit: type=1103 audit(1707505334.786:364): pid=5730 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:15.181181 sshd[5727]: pam_unix(sshd:session): session closed for user core Feb 9 19:02:15.181000 audit[5727]: USER_END pid=5727 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:15.191586 kernel: audit: type=1106 audit(1707505335.181:365): pid=5727 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" 
exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:15.186559 systemd-logind[1698]: Session 11 logged out. Waiting for processes to exit. Feb 9 19:02:15.188270 systemd[1]: sshd@10-172.31.19.7:22-139.178.68.195:38254.service: Deactivated successfully. Feb 9 19:02:15.189471 systemd[1]: session-11.scope: Deactivated successfully. Feb 9 19:02:15.191034 systemd-logind[1698]: Removed session 11. Feb 9 19:02:15.181000 audit[5727]: CRED_DISP pid=5727 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:15.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.19.7:22-139.178.68.195:38254 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:15.201689 kernel: audit: type=1104 audit(1707505335.181:366): pid=5727 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:17.364855 amazon-ssm-agent[1774]: 2024-02-09 19:02:17 INFO [HealthCheck] HealthCheck reporting agent health. Feb 9 19:02:20.210786 systemd[1]: Started sshd@11-172.31.19.7:22-139.178.68.195:54920.service. Feb 9 19:02:20.218857 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 19:02:20.218995 kernel: audit: type=1130 audit(1707505340.210:368): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.19.7:22-139.178.68.195:54920 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:02:20.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.19.7:22-139.178.68.195:54920 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:20.412000 audit[5746]: USER_ACCT pid=5746 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:20.421047 kernel: audit: type=1101 audit(1707505340.412:369): pid=5746 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:20.421117 sshd[5746]: Accepted publickey for core from 139.178.68.195 port 54920 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 19:02:20.418000 audit[5746]: CRED_ACQ pid=5746 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:20.423903 sshd[5746]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:02:20.433035 kernel: audit: type=1103 audit(1707505340.418:370): pid=5746 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:20.433158 kernel: audit: type=1006 audit(1707505340.420:371): pid=5746 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Feb 9 
19:02:20.435621 kernel: audit: type=1300 audit(1707505340.420:371): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffecf871640 a2=3 a3=0 items=0 ppid=1 pid=5746 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:02:20.420000 audit[5746]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffecf871640 a2=3 a3=0 items=0 ppid=1 pid=5746 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:02:20.437821 systemd[1]: Started session-12.scope. Feb 9 19:02:20.438930 systemd-logind[1698]: New session 12 of user core. Feb 9 19:02:20.420000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:02:20.444654 kernel: audit: type=1327 audit(1707505340.420:371): proctitle=737368643A20636F7265205B707269765D Feb 9 19:02:20.448000 audit[5746]: USER_START pid=5746 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:20.449000 audit[5749]: CRED_ACQ pid=5749 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:20.458542 kernel: audit: type=1105 audit(1707505340.448:372): pid=5746 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:20.458606 kernel: audit: type=1103 
audit(1707505340.449:373): pid=5749 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:20.810130 sshd[5746]: pam_unix(sshd:session): session closed for user core Feb 9 19:02:20.812000 audit[5746]: USER_END pid=5746 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:20.816244 systemd[1]: sshd@11-172.31.19.7:22-139.178.68.195:54920.service: Deactivated successfully. Feb 9 19:02:20.817474 systemd[1]: session-12.scope: Deactivated successfully. Feb 9 19:02:20.821715 kernel: audit: type=1106 audit(1707505340.812:374): pid=5746 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:20.821017 systemd-logind[1698]: Session 12 logged out. Waiting for processes to exit. Feb 9 19:02:20.822764 systemd-logind[1698]: Removed session 12. 
Feb 9 19:02:20.812000 audit[5746]: CRED_DISP pid=5746 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:20.831527 kernel: audit: type=1104 audit(1707505340.812:375): pid=5746 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:20.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.19.7:22-139.178.68.195:54920 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:20.836980 systemd[1]: Started sshd@12-172.31.19.7:22-139.178.68.195:54926.service. Feb 9 19:02:20.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.19.7:22-139.178.68.195:54926 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:02:21.011000 audit[5761]: USER_ACCT pid=5761 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:21.013631 sshd[5761]: Accepted publickey for core from 139.178.68.195 port 54926 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 19:02:21.013000 audit[5761]: CRED_ACQ pid=5761 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:21.013000 audit[5761]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffec1419430 a2=3 a3=0 items=0 ppid=1 pid=5761 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:02:21.013000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:02:21.015912 sshd[5761]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:02:21.022922 systemd[1]: Started session-13.scope. Feb 9 19:02:21.023696 systemd-logind[1698]: New session 13 of user core. 
Feb 9 19:02:21.030000 audit[5761]: USER_START pid=5761 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:21.032000 audit[5764]: CRED_ACQ pid=5764 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:22.181399 systemd[1]: run-containerd-runc-k8s.io-f2a0fede62648f90d07e75b7785bea58e5e8993ca6189ca8ac5316ee84453ec1-runc.BuvU7e.mount: Deactivated successfully. Feb 9 19:02:22.248060 systemd[1]: run-containerd-runc-k8s.io-60ff1f681498f286036df84ec5fda0a4dd600df7ac6f19052ea66b7afc0512da-runc.H86gPN.mount: Deactivated successfully. Feb 9 19:02:22.555000 audit[5834]: NETFILTER_CFG table=filter:137 family=2 entries=7 op=nft_register_rule pid=5834 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:02:22.555000 audit[5834]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffdf6472870 a2=0 a3=7ffdf647285c items=0 ppid=3099 pid=5834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:02:22.555000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:02:22.558000 audit[5834]: NETFILTER_CFG table=nat:138 family=2 entries=85 op=nft_register_chain pid=5834 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:02:22.558000 audit[5834]: SYSCALL arch=c000003e syscall=46 success=yes exit=28484 a0=3 a1=7ffdf6472870 a2=0 a3=7ffdf647285c items=0 ppid=3099 pid=5834 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:02:22.558000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:02:22.621000 audit[5860]: NETFILTER_CFG table=filter:139 family=2 entries=6 op=nft_register_rule pid=5860 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:02:22.621000 audit[5860]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffe0c5a6f60 a2=0 a3=7ffe0c5a6f4c items=0 ppid=3099 pid=5860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:02:22.621000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:02:22.627000 audit[5860]: NETFILTER_CFG table=nat:140 family=2 entries=92 op=nft_register_chain pid=5860 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:02:22.627000 audit[5860]: SYSCALL arch=c000003e syscall=46 success=yes exit=30372 a0=3 a1=7ffe0c5a6f60 a2=0 a3=7ffe0c5a6f4c items=0 ppid=3099 pid=5860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:02:22.627000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:02:23.381894 sshd[5761]: pam_unix(sshd:session): session closed for user core Feb 9 19:02:23.386000 audit[5761]: USER_END pid=5761 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:23.386000 audit[5761]: CRED_DISP pid=5761 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:23.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.19.7:22-139.178.68.195:54928 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:23.398956 systemd[1]: Started sshd@13-172.31.19.7:22-139.178.68.195:54928.service. Feb 9 19:02:23.409098 systemd[1]: sshd@12-172.31.19.7:22-139.178.68.195:54926.service: Deactivated successfully. Feb 9 19:02:23.410423 systemd[1]: session-13.scope: Deactivated successfully. Feb 9 19:02:23.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.19.7:22-139.178.68.195:54926 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:23.413383 systemd-logind[1698]: Session 13 logged out. Waiting for processes to exit. Feb 9 19:02:23.418209 systemd-logind[1698]: Removed session 13. 
Feb 9 19:02:23.606345 sshd[5867]: Accepted publickey for core from 139.178.68.195 port 54928 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 19:02:23.604000 audit[5867]: USER_ACCT pid=5867 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:23.606000 audit[5867]: CRED_ACQ pid=5867 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:23.606000 audit[5867]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc0c3d8d00 a2=3 a3=0 items=0 ppid=1 pid=5867 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:02:23.606000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:02:23.609140 sshd[5867]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:02:23.617052 systemd[1]: Started session-14.scope. Feb 9 19:02:23.619111 systemd-logind[1698]: New session 14 of user core. 
Feb 9 19:02:23.631000 audit[5867]: USER_START pid=5867 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:23.633000 audit[5873]: CRED_ACQ pid=5873 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:24.001032 sshd[5867]: pam_unix(sshd:session): session closed for user core Feb 9 19:02:24.001000 audit[5867]: USER_END pid=5867 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:24.001000 audit[5867]: CRED_DISP pid=5867 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:24.006397 systemd-logind[1698]: Session 14 logged out. Waiting for processes to exit. Feb 9 19:02:24.007209 systemd[1]: sshd@13-172.31.19.7:22-139.178.68.195:54928.service: Deactivated successfully. Feb 9 19:02:24.006000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.19.7:22-139.178.68.195:54928 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:24.008893 systemd[1]: session-14.scope: Deactivated successfully. Feb 9 19:02:24.010173 systemd-logind[1698]: Removed session 14. 
Feb 9 19:02:27.404728 systemd[1]: run-containerd-runc-k8s.io-d1b341d8aceb126cadae0fc8420c502e061bd827544986cf6e326aa32a1a04a9-runc.1Pom0n.mount: Deactivated successfully. Feb 9 19:02:29.032843 systemd[1]: Started sshd@14-172.31.19.7:22-139.178.68.195:39796.service. Feb 9 19:02:29.041390 kernel: kauditd_printk_skb: 35 callbacks suppressed Feb 9 19:02:29.041573 kernel: audit: type=1130 audit(1707505349.031:399): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.19.7:22-139.178.68.195:39796 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:29.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.19.7:22-139.178.68.195:39796 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:29.255146 sshd[5905]: Accepted publickey for core from 139.178.68.195 port 39796 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 19:02:29.251000 audit[5905]: USER_ACCT pid=5905 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:29.281856 kernel: audit: type=1101 audit(1707505349.251:400): pid=5905 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:29.282024 kernel: audit: type=1103 audit(1707505349.265:401): pid=5905 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh 
res=success' Feb 9 19:02:29.265000 audit[5905]: CRED_ACQ pid=5905 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:29.282445 sshd[5905]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:02:29.294616 kernel: audit: type=1006 audit(1707505349.265:402): pid=5905 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Feb 9 19:02:29.307597 kernel: audit: type=1300 audit(1707505349.265:402): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffb7c45e00 a2=3 a3=0 items=0 ppid=1 pid=5905 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:02:29.265000 audit[5905]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffb7c45e00 a2=3 a3=0 items=0 ppid=1 pid=5905 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:02:29.303301 systemd[1]: Started session-15.scope. Feb 9 19:02:29.305967 systemd-logind[1698]: New session 15 of user core. 
Feb 9 19:02:29.265000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:02:29.340413 kernel: audit: type=1327 audit(1707505349.265:402): proctitle=737368643A20636F7265205B707269765D Feb 9 19:02:29.331000 audit[5905]: USER_START pid=5905 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:29.349765 kernel: audit: type=1105 audit(1707505349.331:403): pid=5905 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:29.341000 audit[5910]: CRED_ACQ pid=5910 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:29.357620 kernel: audit: type=1103 audit(1707505349.341:404): pid=5910 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:29.572085 sshd[5905]: pam_unix(sshd:session): session closed for user core Feb 9 19:02:29.573000 audit[5905]: USER_END pid=5905 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:29.578786 systemd-logind[1698]: Session 15 logged out. 
Waiting for processes to exit. Feb 9 19:02:29.584810 systemd[1]: sshd@14-172.31.19.7:22-139.178.68.195:39796.service: Deactivated successfully. Feb 9 19:02:29.586845 systemd[1]: session-15.scope: Deactivated successfully. Feb 9 19:02:29.599026 kernel: audit: type=1106 audit(1707505349.573:405): pid=5905 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:29.599170 kernel: audit: type=1104 audit(1707505349.573:406): pid=5905 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:29.573000 audit[5905]: CRED_DISP pid=5905 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:29.590389 systemd-logind[1698]: Removed session 15. Feb 9 19:02:29.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.19.7:22-139.178.68.195:39796 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:34.605892 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 19:02:34.606021 kernel: audit: type=1130 audit(1707505354.596:408): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.19.7:22-139.178.68.195:39806 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:02:34.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.19.7:22-139.178.68.195:39806 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:34.597894 systemd[1]: Started sshd@15-172.31.19.7:22-139.178.68.195:39806.service. Feb 9 19:02:34.772045 sshd[5924]: Accepted publickey for core from 139.178.68.195 port 39806 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 19:02:34.770000 audit[5924]: USER_ACCT pid=5924 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:34.790545 kernel: audit: type=1101 audit(1707505354.770:409): pid=5924 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:34.792652 sshd[5924]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:02:34.789000 audit[5924]: CRED_ACQ pid=5924 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:34.802809 kernel: audit: type=1103 audit(1707505354.789:410): pid=5924 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:34.809532 kernel: audit: type=1006 audit(1707505354.789:411): pid=5924 uid=0 
subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Feb 9 19:02:34.810163 systemd[1]: Started session-16.scope. Feb 9 19:02:34.789000 audit[5924]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffcc74e340 a2=3 a3=0 items=0 ppid=1 pid=5924 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:02:34.811771 systemd-logind[1698]: New session 16 of user core. Feb 9 19:02:34.820809 kernel: audit: type=1300 audit(1707505354.789:411): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffcc74e340 a2=3 a3=0 items=0 ppid=1 pid=5924 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:02:34.789000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:02:34.823729 kernel: audit: type=1327 audit(1707505354.789:411): proctitle=737368643A20636F7265205B707269765D Feb 9 19:02:34.825000 audit[5924]: USER_START pid=5924 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:34.836555 kernel: audit: type=1105 audit(1707505354.825:412): pid=5924 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:34.837232 kernel: audit: type=1103 audit(1707505354.832:413): pid=5927 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix 
acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:34.832000 audit[5927]: CRED_ACQ pid=5927 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:35.038704 sshd[5924]: pam_unix(sshd:session): session closed for user core Feb 9 19:02:35.039000 audit[5924]: USER_END pid=5924 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:35.043185 systemd-logind[1698]: Session 16 logged out. Waiting for processes to exit. Feb 9 19:02:35.045487 systemd[1]: sshd@15-172.31.19.7:22-139.178.68.195:39806.service: Deactivated successfully. Feb 9 19:02:35.046907 systemd[1]: session-16.scope: Deactivated successfully. 
Feb 9 19:02:35.048532 kernel: audit: type=1106 audit(1707505355.039:414): pid=5924 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:35.055681 kernel: audit: type=1104 audit(1707505355.039:415): pid=5924 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:35.039000 audit[5924]: CRED_DISP pid=5924 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:35.049504 systemd-logind[1698]: Removed session 16. Feb 9 19:02:35.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.19.7:22-139.178.68.195:39806 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:38.517084 update_engine[1701]: I0209 19:02:38.515533 1701 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Feb 9 19:02:38.517657 update_engine[1701]: I0209 19:02:38.517105 1701 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Feb 9 19:02:38.522353 update_engine[1701]: I0209 19:02:38.522320 1701 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Feb 9 19:02:38.523750 update_engine[1701]: I0209 19:02:38.523444 1701 omaha_request_params.cc:62] Current group set to lts Feb 9 19:02:38.529655 update_engine[1701]: I0209 19:02:38.529486 1701 update_attempter.cc:499] Already updated boot flags. Skipping. 
Feb 9 19:02:38.529655 update_engine[1701]: I0209 19:02:38.529531 1701 update_attempter.cc:643] Scheduling an action processor start. Feb 9 19:02:38.529655 update_engine[1701]: I0209 19:02:38.529554 1701 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 9 19:02:38.532120 update_engine[1701]: I0209 19:02:38.532088 1701 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Feb 9 19:02:38.532288 update_engine[1701]: I0209 19:02:38.532249 1701 omaha_request_action.cc:270] Posting an Omaha request to disabled Feb 9 19:02:38.532288 update_engine[1701]: I0209 19:02:38.532259 1701 omaha_request_action.cc:271] Request: Feb 9 19:02:38.532288 update_engine[1701]: Feb 9 19:02:38.532288 update_engine[1701]: Feb 9 19:02:38.532288 update_engine[1701]: Feb 9 19:02:38.532288 update_engine[1701]: Feb 9 19:02:38.532288 update_engine[1701]: Feb 9 19:02:38.532288 update_engine[1701]: Feb 9 19:02:38.532288 update_engine[1701]: Feb 9 19:02:38.532288 update_engine[1701]: Feb 9 19:02:38.532288 update_engine[1701]: I0209 19:02:38.532266 1701 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 19:02:38.548072 update_engine[1701]: I0209 19:02:38.546887 1701 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 19:02:38.548813 update_engine[1701]: I0209 19:02:38.548782 1701 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Feb 9 19:02:38.556376 update_engine[1701]: E0209 19:02:38.556340 1701 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 19:02:38.556543 update_engine[1701]: I0209 19:02:38.556463 1701 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Feb 9 19:02:38.583199 locksmithd[1768]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Feb 9 19:02:40.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.19.7:22-139.178.68.195:46112 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:40.067124 systemd[1]: Started sshd@16-172.31.19.7:22-139.178.68.195:46112.service. Feb 9 19:02:40.072897 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 19:02:40.072969 kernel: audit: type=1130 audit(1707505360.067:417): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.19.7:22-139.178.68.195:46112 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:02:40.237000 audit[5939]: USER_ACCT pid=5939 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:40.238584 sshd[5939]: Accepted publickey for core from 139.178.68.195 port 46112 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 19:02:40.244580 kernel: audit: type=1101 audit(1707505360.237:418): pid=5939 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:40.244000 audit[5939]: CRED_ACQ pid=5939 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:40.245355 sshd[5939]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:02:40.253054 systemd[1]: Started session-17.scope. Feb 9 19:02:40.254484 kernel: audit: type=1103 audit(1707505360.244:419): pid=5939 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:40.254574 kernel: audit: type=1006 audit(1707505360.244:420): pid=5939 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Feb 9 19:02:40.254671 systemd-logind[1698]: New session 17 of user core. 
Feb 9 19:02:40.244000 audit[5939]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe1e1afd30 a2=3 a3=0 items=0 ppid=1 pid=5939 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:02:40.261530 kernel: audit: type=1300 audit(1707505360.244:420): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe1e1afd30 a2=3 a3=0 items=0 ppid=1 pid=5939 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:02:40.244000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:02:40.264775 kernel: audit: type=1327 audit(1707505360.244:420): proctitle=737368643A20636F7265205B707269765D Feb 9 19:02:40.264000 audit[5939]: USER_START pid=5939 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:40.267000 audit[5942]: CRED_ACQ pid=5942 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:40.277942 kernel: audit: type=1105 audit(1707505360.264:421): pid=5939 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:40.278055 kernel: audit: type=1103 audit(1707505360.267:422): pid=5942 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:40.471020 sshd[5939]: pam_unix(sshd:session): session closed for user core Feb 9 19:02:40.471000 audit[5939]: USER_END pid=5939 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:40.474622 systemd-logind[1698]: Session 17 logged out. Waiting for processes to exit. Feb 9 19:02:40.476271 systemd[1]: sshd@16-172.31.19.7:22-139.178.68.195:46112.service: Deactivated successfully. Feb 9 19:02:40.480646 kernel: audit: type=1106 audit(1707505360.471:423): pid=5939 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:40.480795 kernel: audit: type=1104 audit(1707505360.472:424): pid=5939 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:40.472000 audit[5939]: CRED_DISP pid=5939 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:40.477449 systemd[1]: session-17.scope: Deactivated successfully. Feb 9 19:02:40.479148 systemd-logind[1698]: Removed session 17. 
Feb 9 19:02:40.472000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.19.7:22-139.178.68.195:46112 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:43.081106 systemd[1]: run-containerd-runc-k8s.io-7b68f7f0947b9dc802baf5f137268570e843849a31d7324725cf8b39423685e9-runc.ufaZaJ.mount: Deactivated successfully. Feb 9 19:02:45.496224 systemd[1]: Started sshd@17-172.31.19.7:22-139.178.68.195:46124.service. Feb 9 19:02:45.504852 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 19:02:45.504997 kernel: audit: type=1130 audit(1707505365.496:426): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.19.7:22-139.178.68.195:46124 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:45.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.19.7:22-139.178.68.195:46124 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:02:45.697000 audit[5973]: USER_ACCT pid=5973 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:45.706198 sshd[5973]: Accepted publickey for core from 139.178.68.195 port 46124 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 19:02:45.706735 kernel: audit: type=1101 audit(1707505365.697:427): pid=5973 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:45.713739 kernel: audit: type=1103 audit(1707505365.699:428): pid=5973 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:45.699000 audit[5973]: CRED_ACQ pid=5973 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:45.707622 sshd[5973]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:02:45.721083 kernel: audit: type=1006 audit(1707505365.699:429): pid=5973 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Feb 9 19:02:45.699000 audit[5973]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe5aa08130 a2=3 a3=0 items=0 ppid=1 pid=5973 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" 
subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:02:45.731539 kernel: audit: type=1300 audit(1707505365.699:429): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe5aa08130 a2=3 a3=0 items=0 ppid=1 pid=5973 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:02:45.736742 systemd-logind[1698]: New session 18 of user core. Feb 9 19:02:45.739096 systemd[1]: Started session-18.scope. Feb 9 19:02:45.699000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:02:45.755601 kernel: audit: type=1327 audit(1707505365.699:429): proctitle=737368643A20636F7265205B707269765D Feb 9 19:02:45.748000 audit[5973]: USER_START pid=5973 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:45.775992 kernel: audit: type=1105 audit(1707505365.748:430): pid=5973 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:45.755000 audit[5976]: CRED_ACQ pid=5976 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:45.798539 kernel: audit: type=1103 audit(1707505365.755:431): pid=5976 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh 
res=success' Feb 9 19:02:46.051948 sshd[5973]: pam_unix(sshd:session): session closed for user core Feb 9 19:02:46.053000 audit[5973]: USER_END pid=5973 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:46.061897 systemd[1]: sshd@17-172.31.19.7:22-139.178.68.195:46124.service: Deactivated successfully. Feb 9 19:02:46.062540 kernel: audit: type=1106 audit(1707505366.053:432): pid=5973 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:46.063129 systemd[1]: session-18.scope: Deactivated successfully. Feb 9 19:02:46.064486 systemd-logind[1698]: Session 18 logged out. Waiting for processes to exit. Feb 9 19:02:46.065609 systemd-logind[1698]: Removed session 18. Feb 9 19:02:46.053000 audit[5973]: CRED_DISP pid=5973 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:46.075660 kernel: audit: type=1104 audit(1707505366.053:433): pid=5973 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:46.061000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.19.7:22-139.178.68.195:46124 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:02:46.079834 systemd[1]: Started sshd@18-172.31.19.7:22-139.178.68.195:48502.service. Feb 9 19:02:46.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.19.7:22-139.178.68.195:48502 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:46.240000 audit[5986]: USER_ACCT pid=5986 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:46.241284 sshd[5986]: Accepted publickey for core from 139.178.68.195 port 48502 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 19:02:46.241000 audit[5986]: CRED_ACQ pid=5986 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:46.242000 audit[5986]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff65713110 a2=3 a3=0 items=0 ppid=1 pid=5986 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:02:46.242000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:02:46.242950 sshd[5986]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:02:46.248436 systemd-logind[1698]: New session 19 of user core. Feb 9 19:02:46.248555 systemd[1]: Started session-19.scope. 
Feb 9 19:02:46.256000 audit[5986]: USER_START pid=5986 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:46.258000 audit[5989]: CRED_ACQ pid=5989 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:47.141761 sshd[5986]: pam_unix(sshd:session): session closed for user core Feb 9 19:02:47.142000 audit[5986]: USER_END pid=5986 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:47.143000 audit[5986]: CRED_DISP pid=5986 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:47.145000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.19.7:22-139.178.68.195:48502 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:47.145619 systemd[1]: sshd@18-172.31.19.7:22-139.178.68.195:48502.service: Deactivated successfully. Feb 9 19:02:47.148661 systemd[1]: session-19.scope: Deactivated successfully. Feb 9 19:02:47.150599 systemd-logind[1698]: Session 19 logged out. Waiting for processes to exit. Feb 9 19:02:47.153557 systemd-logind[1698]: Removed session 19. 
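The audit records interleaved above are stamped `audit(<epoch>.<msec>:<serial>)` rather than with a human-readable date. A minimal Python sketch of converting such a stamp to UTC wall-clock time (the stamp value is copied verbatim from the records above; the function name is illustrative):

```python
import datetime

def audit_time(stamp: str) -> datetime.datetime:
    """Convert an 'audit(<epoch>.<msec>:<serial>)' stamp to a UTC datetime."""
    inner = stamp[stamp.index("(") + 1 : stamp.index(")")]
    epoch, _serial = inner.split(":")  # the serial only orders records, drop it
    return datetime.datetime.fromtimestamp(float(epoch), tz=datetime.timezone.utc)

# Stamp taken from the USER_START/SYSCALL records in this log
print(audit_time("audit(1707505365.699:429)").strftime("%Y-%m-%d %H:%M:%S"))
# → 2024-02-09 19:02:45, matching the "Feb 9 19:02:45" syslog prefix
```

The serial after the colon ties together the multiple records (SYSCALL, PROCTITLE, CRED_ACQ, ...) that auditd emits for one event.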
Feb 9 19:02:47.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.19.7:22-139.178.68.195:48514 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:47.164858 systemd[1]: Started sshd@19-172.31.19.7:22-139.178.68.195:48514.service. Feb 9 19:02:47.363190 sshd[5997]: Accepted publickey for core from 139.178.68.195 port 48514 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 19:02:47.362000 audit[5997]: USER_ACCT pid=5997 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:47.366000 audit[5997]: CRED_ACQ pid=5997 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:47.366000 audit[5997]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe00add320 a2=3 a3=0 items=0 ppid=1 pid=5997 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:02:47.366000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:02:47.368737 sshd[5997]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:02:47.379348 systemd[1]: Started session-20.scope. Feb 9 19:02:47.379830 systemd-logind[1698]: New session 20 of user core. 
Feb 9 19:02:47.401000 audit[5997]: USER_START pid=5997 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:47.408000 audit[6000]: CRED_ACQ pid=6000 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:48.495284 update_engine[1701]: I0209 19:02:48.494560 1701 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 19:02:48.495284 update_engine[1701]: I0209 19:02:48.495033 1701 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 19:02:48.495284 update_engine[1701]: I0209 19:02:48.495244 1701 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 9 19:02:48.497544 update_engine[1701]: E0209 19:02:48.497308 1701 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 19:02:48.497544 update_engine[1701]: I0209 19:02:48.497488 1701 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Feb 9 19:02:49.411674 systemd[1]: run-containerd-runc-k8s.io-d1b341d8aceb126cadae0fc8420c502e061bd827544986cf6e326aa32a1a04a9-runc.z0xrhR.mount: Deactivated successfully. 
Feb 9 19:02:51.043877 sshd[5997]: pam_unix(sshd:session): session closed for user core Feb 9 19:02:51.060319 kernel: kauditd_printk_skb: 20 callbacks suppressed Feb 9 19:02:51.060482 kernel: audit: type=1106 audit(1707505371.051:450): pid=5997 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:51.051000 audit[5997]: USER_END pid=5997 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:51.059282 systemd[1]: Started sshd@20-172.31.19.7:22-139.178.68.195:48522.service. Feb 9 19:02:51.080272 systemd[1]: sshd@19-172.31.19.7:22-139.178.68.195:48514.service: Deactivated successfully. Feb 9 19:02:51.084794 systemd[1]: session-20.scope: Deactivated successfully. Feb 9 19:02:51.085709 systemd-logind[1698]: Session 20 logged out. Waiting for processes to exit. Feb 9 19:02:51.097621 kernel: audit: type=1130 audit(1707505371.058:451): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.19.7:22-139.178.68.195:48522 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:51.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.19.7:22-139.178.68.195:48522 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:51.099964 systemd-logind[1698]: Removed session 20. 
Feb 9 19:02:51.062000 audit[5997]: CRED_DISP pid=5997 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:51.114611 kernel: audit: type=1104 audit(1707505371.062:452): pid=5997 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:51.080000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.19.7:22-139.178.68.195:48514 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:51.135533 kernel: audit: type=1131 audit(1707505371.080:453): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.19.7:22-139.178.68.195:48514 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:02:51.145000 audit[6061]: NETFILTER_CFG table=filter:141 family=2 entries=18 op=nft_register_rule pid=6061 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:02:51.145000 audit[6061]: SYSCALL arch=c000003e syscall=46 success=yes exit=10364 a0=3 a1=7ffe32fee580 a2=0 a3=7ffe32fee56c items=0 ppid=3099 pid=6061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:02:51.157062 kernel: audit: type=1325 audit(1707505371.145:454): table=filter:141 family=2 entries=18 op=nft_register_rule pid=6061 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:02:51.157218 kernel: audit: type=1300 audit(1707505371.145:454): arch=c000003e syscall=46 success=yes exit=10364 a0=3 a1=7ffe32fee580 a2=0 a3=7ffe32fee56c items=0 ppid=3099 pid=6061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:02:51.145000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:02:51.145000 audit[6061]: NETFILTER_CFG table=nat:142 family=2 entries=94 op=nft_register_rule pid=6061 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:02:51.171549 kernel: audit: type=1327 audit(1707505371.145:454): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:02:51.171672 kernel: audit: type=1325 audit(1707505371.145:455): table=nat:142 family=2 entries=94 op=nft_register_rule pid=6061 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:02:51.171701 kernel: audit: type=1300 audit(1707505371.145:455): arch=c000003e syscall=46 success=yes exit=30372 a0=3 a1=7ffe32fee580 a2=0 
a3=7ffe32fee56c items=0 ppid=3099 pid=6061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:02:51.145000 audit[6061]: SYSCALL arch=c000003e syscall=46 success=yes exit=30372 a0=3 a1=7ffe32fee580 a2=0 a3=7ffe32fee56c items=0 ppid=3099 pid=6061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:02:51.145000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:02:51.182538 kernel: audit: type=1327 audit(1707505371.145:455): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:02:51.225000 audit[6087]: NETFILTER_CFG table=filter:143 family=2 entries=30 op=nft_register_rule pid=6087 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:02:51.225000 audit[6087]: SYSCALL arch=c000003e syscall=46 success=yes exit=10364 a0=3 a1=7ffecac78310 a2=0 a3=7ffecac782fc items=0 ppid=3099 pid=6087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:02:51.225000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:02:51.228000 audit[6087]: NETFILTER_CFG table=nat:144 family=2 entries=94 op=nft_register_rule pid=6087 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:02:51.228000 audit[6087]: SYSCALL arch=c000003e syscall=46 success=yes exit=30372 a0=3 a1=7ffecac78310 a2=0 a3=7ffecac782fc items=0 ppid=3099 pid=6087 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:02:51.228000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:02:51.264000 audit[6057]: USER_ACCT pid=6057 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:51.267071 sshd[6057]: Accepted publickey for core from 139.178.68.195 port 48522 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 19:02:51.267000 audit[6057]: CRED_ACQ pid=6057 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:51.267000 audit[6057]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd570b1df0 a2=3 a3=0 items=0 ppid=1 pid=6057 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:02:51.267000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:02:51.268840 sshd[6057]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:02:51.275327 systemd[1]: Started session-21.scope. Feb 9 19:02:51.275786 systemd-logind[1698]: New session 21 of user core. 
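The body of each audit record is a flat sequence of `key=value` fields, some double-quoted (`comm="iptables-restor"`). A sketch of splitting one into a dict for ad-hoc analysis (the record excerpt is taken from the NETFILTER_CFG events above; `parse_audit_fields` is an illustrative name, not an auditd API):

```python
import shlex

def parse_audit_fields(record: str) -> dict:
    """Split an audit record body into key=value fields; shlex strips quotes."""
    fields = {}
    for token in shlex.split(record):
        if "=" in token:
            key, _, value = token.partition("=")
            fields[key] = value
    return fields

rec = ('table=nat:142 family=2 entries=94 op=nft_register_rule '
       'pid=6061 comm="iptables-restor"')
parsed = parse_audit_fields(rec)
print(parsed["entries"], parsed["op"])
# → 94 nft_register_rule
```

This is enough to, for example, count how many nft rules each `iptables-restore` invocation in this log registered.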
Feb 9 19:02:51.283000 audit[6057]: USER_START pid=6057 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:51.285000 audit[6089]: CRED_ACQ pid=6089 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:52.169727 systemd[1]: run-containerd-runc-k8s.io-f2a0fede62648f90d07e75b7785bea58e5e8993ca6189ca8ac5316ee84453ec1-runc.Kw1cj4.mount: Deactivated successfully. Feb 9 19:02:52.478728 sshd[6057]: pam_unix(sshd:session): session closed for user core Feb 9 19:02:52.488000 audit[6057]: USER_END pid=6057 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:52.489000 audit[6057]: CRED_DISP pid=6057 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:52.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.19.7:22-139.178.68.195:48522 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:52.494385 systemd[1]: sshd@20-172.31.19.7:22-139.178.68.195:48522.service: Deactivated successfully. Feb 9 19:02:52.496150 systemd-logind[1698]: Session 21 logged out. Waiting for processes to exit. 
Feb 9 19:02:52.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.19.7:22-139.178.68.195:48530 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:52.499167 systemd[1]: Started sshd@21-172.31.19.7:22-139.178.68.195:48530.service. Feb 9 19:02:52.499948 systemd[1]: session-21.scope: Deactivated successfully. Feb 9 19:02:52.502331 systemd-logind[1698]: Removed session 21. Feb 9 19:02:52.710000 audit[6136]: USER_ACCT pid=6136 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:52.711598 sshd[6136]: Accepted publickey for core from 139.178.68.195 port 48530 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 19:02:52.712000 audit[6136]: CRED_ACQ pid=6136 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:52.712000 audit[6136]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd60b94300 a2=3 a3=0 items=0 ppid=1 pid=6136 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:02:52.712000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:02:52.714993 sshd[6136]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:02:52.721587 systemd-logind[1698]: New session 22 of user core. Feb 9 19:02:52.722201 systemd[1]: Started session-22.scope. 
Feb 9 19:02:52.731000 audit[6136]: USER_START pid=6136 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:52.733000 audit[6139]: CRED_ACQ pid=6139 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:52.991813 sshd[6136]: pam_unix(sshd:session): session closed for user core Feb 9 19:02:52.993000 audit[6136]: USER_END pid=6136 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:52.994000 audit[6136]: CRED_DISP pid=6136 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:53.000033 systemd[1]: sshd@21-172.31.19.7:22-139.178.68.195:48530.service: Deactivated successfully. Feb 9 19:02:52.999000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.19.7:22-139.178.68.195:48530 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:53.002209 systemd[1]: session-22.scope: Deactivated successfully. Feb 9 19:02:53.003832 systemd-logind[1698]: Session 22 logged out. Waiting for processes to exit. Feb 9 19:02:53.009955 systemd-logind[1698]: Removed session 22. 
Feb 9 19:02:58.027969 kernel: kauditd_printk_skb: 27 callbacks suppressed Feb 9 19:02:58.028182 kernel: audit: type=1130 audit(1707505378.015:475): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.19.7:22-139.178.68.195:60974 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:58.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.19.7:22-139.178.68.195:60974 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:58.015997 systemd[1]: Started sshd@22-172.31.19.7:22-139.178.68.195:60974.service. Feb 9 19:02:58.225729 kernel: audit: type=1101 audit(1707505378.215:476): pid=6170 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:58.215000 audit[6170]: USER_ACCT pid=6170 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:58.226196 sshd[6170]: Accepted publickey for core from 139.178.68.195 port 60974 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 19:02:58.228000 audit[6170]: CRED_ACQ pid=6170 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:58.230657 sshd[6170]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:02:58.241749 kernel: audit: type=1103 audit(1707505378.228:477): pid=6170 
uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:58.247634 kernel: audit: type=1006 audit(1707505378.228:478): pid=6170 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Feb 9 19:02:58.228000 audit[6170]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc4eb22f10 a2=3 a3=0 items=0 ppid=1 pid=6170 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:02:58.256556 kernel: audit: type=1300 audit(1707505378.228:478): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc4eb22f10 a2=3 a3=0 items=0 ppid=1 pid=6170 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:02:58.254093 systemd[1]: Started session-23.scope. Feb 9 19:02:58.256152 systemd-logind[1698]: New session 23 of user core. 
Feb 9 19:02:58.228000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:02:58.265614 kernel: audit: type=1327 audit(1707505378.228:478): proctitle=737368643A20636F7265205B707269765D Feb 9 19:02:58.266000 audit[6170]: USER_START pid=6170 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:58.268000 audit[6173]: CRED_ACQ pid=6173 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:58.285129 kernel: audit: type=1105 audit(1707505378.266:479): pid=6170 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:58.285260 kernel: audit: type=1103 audit(1707505378.268:480): pid=6173 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:58.494446 update_engine[1701]: I0209 19:02:58.494330 1701 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 19:02:58.496030 update_engine[1701]: I0209 19:02:58.494719 1701 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 19:02:58.496030 update_engine[1701]: I0209 19:02:58.495968 1701 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Feb 9 19:02:58.496300 update_engine[1701]: E0209 19:02:58.496276 1701 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 19:02:58.496422 update_engine[1701]: I0209 19:02:58.496400 1701 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Feb 9 19:02:58.497540 sshd[6170]: pam_unix(sshd:session): session closed for user core Feb 9 19:02:58.499000 audit[6170]: USER_END pid=6170 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:58.509556 kernel: audit: type=1106 audit(1707505378.499:481): pid=6170 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:58.508000 audit[6170]: CRED_DISP pid=6170 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:58.514024 systemd[1]: sshd@22-172.31.19.7:22-139.178.68.195:60974.service: Deactivated successfully. Feb 9 19:02:58.515907 systemd[1]: session-23.scope: Deactivated successfully. Feb 9 19:02:58.517618 kernel: audit: type=1104 audit(1707505378.508:482): pid=6170 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:02:58.518413 systemd-logind[1698]: Session 23 logged out. Waiting for processes to exit. 
Feb 9 19:02:58.519479 systemd-logind[1698]: Removed session 23. Feb 9 19:02:58.514000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.19.7:22-139.178.68.195:60974 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:59.461000 audit[6208]: NETFILTER_CFG table=filter:145 family=2 entries=18 op=nft_register_rule pid=6208 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:02:59.461000 audit[6208]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffe840d60b0 a2=0 a3=7ffe840d609c items=0 ppid=3099 pid=6208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:02:59.461000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:02:59.466000 audit[6208]: NETFILTER_CFG table=nat:146 family=2 entries=178 op=nft_register_chain pid=6208 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:02:59.466000 audit[6208]: SYSCALL arch=c000003e syscall=46 success=yes exit=72324 a0=3 a1=7ffe840d60b0 a2=0 a3=7ffe840d609c items=0 ppid=3099 pid=6208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:02:59.466000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:03:03.538437 kernel: kauditd_printk_skb: 7 callbacks suppressed Feb 9 19:03:03.540692 kernel: audit: type=1130 audit(1707505383.528:486): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.19.7:22-139.178.68.195:60990 comm="systemd" 
exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:03:03.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.19.7:22-139.178.68.195:60990 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:03:03.528680 systemd[1]: Started sshd@23-172.31.19.7:22-139.178.68.195:60990.service. Feb 9 19:03:03.722998 kernel: audit: type=1101 audit(1707505383.712:487): pid=6216 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:03:03.712000 audit[6216]: USER_ACCT pid=6216 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:03:03.723802 sshd[6216]: Accepted publickey for core from 139.178.68.195 port 60990 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 19:03:03.725000 audit[6216]: CRED_ACQ pid=6216 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:03:03.726590 sshd[6216]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:03:03.739576 systemd[1]: Started session-24.scope. 
Feb 9 19:03:03.741635 kernel: audit: type=1103 audit(1707505383.725:488): pid=6216 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:03:03.741712 kernel: audit: type=1006 audit(1707505383.725:489): pid=6216 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1
Feb 9 19:03:03.744212 systemd-logind[1698]: New session 24 of user core.
Feb 9 19:03:03.725000 audit[6216]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd6dc80310 a2=3 a3=0 items=0 ppid=1 pid=6216 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:03:03.753612 kernel: audit: type=1300 audit(1707505383.725:489): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd6dc80310 a2=3 a3=0 items=0 ppid=1 pid=6216 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:03:03.725000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 9 19:03:03.768578 kernel: audit: type=1327 audit(1707505383.725:489): proctitle=737368643A20636F7265205B707269765D
Feb 9 19:03:03.771000 audit[6216]: USER_START pid=6216 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:03:03.777000 audit[6219]: CRED_ACQ pid=6219 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:03:03.792061 kernel: audit: type=1105 audit(1707505383.771:490): pid=6216 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:03:03.792347 kernel: audit: type=1103 audit(1707505383.777:491): pid=6219 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:03:04.082262 sshd[6216]: pam_unix(sshd:session): session closed for user core
Feb 9 19:03:04.087000 audit[6216]: USER_END pid=6216 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:03:04.098561 kernel: audit: type=1106 audit(1707505384.087:492): pid=6216 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:03:04.107088 kernel: audit: type=1104 audit(1707505384.089:493): pid=6216 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:03:04.089000 audit[6216]: CRED_DISP pid=6216 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:03:04.101345 systemd-logind[1698]: Session 24 logged out. Waiting for processes to exit.
Feb 9 19:03:04.104299 systemd[1]: sshd@23-172.31.19.7:22-139.178.68.195:60990.service: Deactivated successfully.
Feb 9 19:03:04.106372 systemd[1]: session-24.scope: Deactivated successfully.
Feb 9 19:03:04.104000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.19.7:22-139.178.68.195:60990 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:03:04.108615 systemd-logind[1698]: Removed session 24.
Feb 9 19:03:08.498613 update_engine[1701]: I0209 19:03:08.498564  1701 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 9 19:03:08.499230 update_engine[1701]: I0209 19:03:08.498896  1701 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 9 19:03:08.499230 update_engine[1701]: I0209 19:03:08.499219  1701 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 9 19:03:08.499994 update_engine[1701]: E0209 19:03:08.499971  1701 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 9 19:03:08.500114 update_engine[1701]: I0209 19:03:08.500067  1701 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Feb 9 19:03:08.500114 update_engine[1701]: I0209 19:03:08.500077  1701 omaha_request_action.cc:621] Omaha request response:
Feb 9 19:03:08.502698 update_engine[1701]: E0209 19:03:08.500167  1701 omaha_request_action.cc:640] Omaha request network transfer failed.
Feb 9 19:03:08.502698 update_engine[1701]: I0209 19:03:08.500185  1701 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Feb 9 19:03:08.502698 update_engine[1701]: I0209 19:03:08.500190  1701 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 9 19:03:08.502698 update_engine[1701]: I0209 19:03:08.500195  1701 update_attempter.cc:306] Processing Done.
Feb 9 19:03:08.502698 update_engine[1701]: E0209 19:03:08.500212  1701 update_attempter.cc:619] Update failed.
Feb 9 19:03:08.502698 update_engine[1701]: I0209 19:03:08.500216  1701 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Feb 9 19:03:08.502698 update_engine[1701]: I0209 19:03:08.500221  1701 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Feb 9 19:03:08.502698 update_engine[1701]: I0209 19:03:08.500224  1701 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Feb 9 19:03:08.502698 update_engine[1701]: I0209 19:03:08.500295  1701 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Feb 9 19:03:08.502698 update_engine[1701]: I0209 19:03:08.500322  1701 omaha_request_action.cc:270] Posting an Omaha request to disabled
Feb 9 19:03:08.502698 update_engine[1701]: I0209 19:03:08.500327  1701 omaha_request_action.cc:271] Request:
Feb 9 19:03:08.502698 update_engine[1701]:
Feb 9 19:03:08.502698 update_engine[1701]:
Feb 9 19:03:08.502698 update_engine[1701]:
Feb 9 19:03:08.502698 update_engine[1701]:
Feb 9 19:03:08.502698 update_engine[1701]:
Feb 9 19:03:08.502698 update_engine[1701]:
Feb 9 19:03:08.502698 update_engine[1701]: I0209 19:03:08.500332  1701 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 9 19:03:08.502698 update_engine[1701]: I0209 19:03:08.500480  1701 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 9 19:03:08.502698 update_engine[1701]: I0209 19:03:08.500633  1701 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 9 19:03:08.503494 update_engine[1701]: E0209 19:03:08.501191  1701 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 9 19:03:08.503494 update_engine[1701]: I0209 19:03:08.501266  1701 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Feb 9 19:03:08.503494 update_engine[1701]: I0209 19:03:08.501272  1701 omaha_request_action.cc:621] Omaha request response:
Feb 9 19:03:08.503494 update_engine[1701]: I0209 19:03:08.501277  1701 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 9 19:03:08.503494 update_engine[1701]: I0209 19:03:08.501281  1701 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 9 19:03:08.503494 update_engine[1701]: I0209 19:03:08.501284  1701 update_attempter.cc:306] Processing Done.
Feb 9 19:03:08.503494 update_engine[1701]: I0209 19:03:08.501291  1701 update_attempter.cc:310] Error event sent.
Feb 9 19:03:08.503494 update_engine[1701]: I0209 19:03:08.501298  1701 update_check_scheduler.cc:74] Next update check in 40m35s
Feb 9 19:03:08.504846 locksmithd[1768]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Feb 9 19:03:08.504846 locksmithd[1768]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Feb 9 19:03:09.107670 systemd[1]: Started sshd@24-172.31.19.7:22-139.178.68.195:55834.service.
Feb 9 19:03:09.114782 kernel: kauditd_printk_skb: 1 callbacks suppressed
Feb 9 19:03:09.114872 kernel: audit: type=1130 audit(1707505389.107:495): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.19.7:22-139.178.68.195:55834 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:03:09.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.19.7:22-139.178.68.195:55834 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:03:09.272000 audit[6233]: USER_ACCT pid=6233 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:03:09.273476 sshd[6233]: Accepted publickey for core from 139.178.68.195 port 55834 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg
Feb 9 19:03:09.280549 kernel: audit: type=1101 audit(1707505389.272:496): pid=6233 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:03:09.280000 audit[6233]: CRED_ACQ pid=6233 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:03:09.281540 sshd[6233]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:03:09.290015 systemd[1]: Started session-25.scope.
Feb 9 19:03:09.291539 kernel: audit: type=1103 audit(1707505389.280:497): pid=6233 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:03:09.291626 kernel: audit: type=1006 audit(1707505389.280:498): pid=6233 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1
Feb 9 19:03:09.291679 kernel: audit: type=1300 audit(1707505389.280:498): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcfa31d590 a2=3 a3=0 items=0 ppid=1 pid=6233 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:03:09.280000 audit[6233]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcfa31d590 a2=3 a3=0 items=0 ppid=1 pid=6233 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:03:09.291888 systemd-logind[1698]: New session 25 of user core.
Feb 9 19:03:09.280000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 9 19:03:09.301292 kernel: audit: type=1327 audit(1707505389.280:498): proctitle=737368643A20636F7265205B707269765D
Feb 9 19:03:09.305000 audit[6233]: USER_START pid=6233 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:03:09.317717 kernel: audit: type=1105 audit(1707505389.305:499): pid=6233 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:03:09.308000 audit[6236]: CRED_ACQ pid=6236 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:03:09.325571 kernel: audit: type=1103 audit(1707505389.308:500): pid=6236 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:03:09.548273 sshd[6233]: pam_unix(sshd:session): session closed for user core
Feb 9 19:03:09.549000 audit[6233]: USER_END pid=6233 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:03:09.554924 systemd[1]: sshd@24-172.31.19.7:22-139.178.68.195:55834.service: Deactivated successfully.
Feb 9 19:03:09.550000 audit[6233]: CRED_DISP pid=6233 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:03:09.557031 systemd[1]: session-25.scope: Deactivated successfully.
Feb 9 19:03:09.558251 systemd-logind[1698]: Session 25 logged out. Waiting for processes to exit.
Feb 9 19:03:09.559558 systemd-logind[1698]: Removed session 25.
Feb 9 19:03:09.562571 kernel: audit: type=1106 audit(1707505389.549:501): pid=6233 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:03:09.562649 kernel: audit: type=1104 audit(1707505389.550:502): pid=6233 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:03:09.554000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.19.7:22-139.178.68.195:55834 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:03:13.066256 systemd[1]: run-containerd-runc-k8s.io-7b68f7f0947b9dc802baf5f137268570e843849a31d7324725cf8b39423685e9-runc.Xoiem7.mount: Deactivated successfully.
Feb 9 19:03:14.576543 systemd[1]: Started sshd@25-172.31.19.7:22-139.178.68.195:55836.service.
Feb 9 19:03:14.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.19.7:22-139.178.68.195:55836 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:03:14.578037 kernel: kauditd_printk_skb: 1 callbacks suppressed
Feb 9 19:03:14.578134 kernel: audit: type=1130 audit(1707505394.576:504): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.19.7:22-139.178.68.195:55836 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:03:14.782000 audit[6270]: USER_ACCT pid=6270 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:03:14.786657 sshd[6270]: Accepted publickey for core from 139.178.68.195 port 55836 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg
Feb 9 19:03:14.788000 audit[6270]: CRED_ACQ pid=6270 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:03:14.792018 sshd[6270]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:03:14.794592 kernel: audit: type=1101 audit(1707505394.782:505): pid=6270 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:03:14.794687 kernel: audit: type=1103 audit(1707505394.788:506): pid=6270 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:03:14.798256 kernel: audit: type=1006 audit(1707505394.788:507): pid=6270 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1
Feb 9 19:03:14.788000 audit[6270]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe26aa1d30 a2=3 a3=0 items=0 ppid=1 pid=6270 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:03:14.788000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 9 19:03:14.806854 kernel: audit: type=1300 audit(1707505394.788:507): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe26aa1d30 a2=3 a3=0 items=0 ppid=1 pid=6270 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:03:14.806939 kernel: audit: type=1327 audit(1707505394.788:507): proctitle=737368643A20636F7265205B707269765D
Feb 9 19:03:14.811172 systemd-logind[1698]: New session 26 of user core.
Feb 9 19:03:14.811962 systemd[1]: Started session-26.scope.
Feb 9 19:03:14.820000 audit[6270]: USER_START pid=6270 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:03:14.829632 kernel: audit: type=1105 audit(1707505394.820:508): pid=6270 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:03:14.844858 kernel: audit: type=1103 audit(1707505394.827:509): pid=6273 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:03:14.827000 audit[6273]: CRED_ACQ pid=6273 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:03:15.159822 sshd[6270]: pam_unix(sshd:session): session closed for user core
Feb 9 19:03:15.161000 audit[6270]: USER_END pid=6270 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:03:15.173957 systemd[1]: sshd@25-172.31.19.7:22-139.178.68.195:55836.service: Deactivated successfully.
Feb 9 19:03:15.175957 systemd[1]: session-26.scope: Deactivated successfully.
Feb 9 19:03:15.178540 kernel: audit: type=1106 audit(1707505395.161:510): pid=6270 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:03:15.178633 kernel: audit: type=1104 audit(1707505395.162:511): pid=6270 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:03:15.162000 audit[6270]: CRED_DISP pid=6270 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:03:15.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.19.7:22-139.178.68.195:55836 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:03:15.186507 systemd-logind[1698]: Session 26 logged out. Waiting for processes to exit.
Feb 9 19:03:15.188455 systemd-logind[1698]: Removed session 26.
Feb 9 19:03:20.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.19.7:22-139.178.68.195:47696 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:03:20.186688 systemd[1]: Started sshd@26-172.31.19.7:22-139.178.68.195:47696.service.
Feb 9 19:03:20.188584 kernel: kauditd_printk_skb: 1 callbacks suppressed
Feb 9 19:03:20.188669 kernel: audit: type=1130 audit(1707505400.186:513): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.19.7:22-139.178.68.195:47696 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:03:20.365000 audit[6295]: USER_ACCT pid=6295 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:03:20.372981 kernel: audit: type=1101 audit(1707505400.365:514): pid=6295 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:03:20.380784 kernel: audit: type=1103 audit(1707505400.372:515): pid=6295 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:03:20.372000 audit[6295]: CRED_ACQ pid=6295 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:03:20.380960 sshd[6295]: Accepted publickey for core from 139.178.68.195 port 47696 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg
Feb 9 19:03:20.373773 sshd[6295]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:03:20.385769 kernel: audit: type=1006 audit(1707505400.372:516): pid=6295 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1
Feb 9 19:03:20.382506 systemd[1]: Started session-27.scope.
Feb 9 19:03:20.384659 systemd-logind[1698]: New session 27 of user core.
Feb 9 19:03:20.372000 audit[6295]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc2c208f10 a2=3 a3=0 items=0 ppid=1 pid=6295 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:03:20.397527 kernel: audit: type=1300 audit(1707505400.372:516): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc2c208f10 a2=3 a3=0 items=0 ppid=1 pid=6295 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:03:20.372000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 9 19:03:20.398000 audit[6295]: USER_START pid=6295 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:03:20.412223 kernel: audit: type=1327 audit(1707505400.372:516): proctitle=737368643A20636F7265205B707269765D
Feb 9 19:03:20.412330 kernel: audit: type=1105 audit(1707505400.398:517): pid=6295 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:03:20.401000 audit[6299]: CRED_ACQ pid=6299 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:03:20.418552 kernel: audit: type=1103 audit(1707505400.401:518): pid=6299 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:03:20.602796 sshd[6295]: pam_unix(sshd:session): session closed for user core
Feb 9 19:03:20.603000 audit[6295]: USER_END pid=6295 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:03:20.607000 audit[6295]: CRED_DISP pid=6295 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:03:20.623561 kernel: audit: type=1106 audit(1707505400.603:519): pid=6295 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:03:20.623717 kernel: audit: type=1104 audit(1707505400.607:520): pid=6295 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:03:20.624730 systemd[1]: sshd@26-172.31.19.7:22-139.178.68.195:47696.service: Deactivated successfully.
Feb 9 19:03:20.624000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.19.7:22-139.178.68.195:47696 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:03:20.626292 systemd[1]: session-27.scope: Deactivated successfully.
Feb 9 19:03:20.627908 systemd-logind[1698]: Session 27 logged out. Waiting for processes to exit.
Feb 9 19:03:20.629467 systemd-logind[1698]: Removed session 27.
Feb 9 19:03:22.159979 systemd[1]: run-containerd-runc-k8s.io-f2a0fede62648f90d07e75b7785bea58e5e8993ca6189ca8ac5316ee84453ec1-runc.gEMWxF.mount: Deactivated successfully.
Feb 9 19:03:22.206150 systemd[1]: run-containerd-runc-k8s.io-60ff1f681498f286036df84ec5fda0a4dd600df7ac6f19052ea66b7afc0512da-runc.k12vin.mount: Deactivated successfully.
Feb 9 19:03:25.647990 kernel: kauditd_printk_skb: 1 callbacks suppressed
Feb 9 19:03:25.648168 kernel: audit: type=1130 audit(1707505405.630:522): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-172.31.19.7:22-139.178.68.195:47706 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:03:25.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-172.31.19.7:22-139.178.68.195:47706 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:03:25.630808 systemd[1]: Started sshd@27-172.31.19.7:22-139.178.68.195:47706.service.
Feb 9 19:03:25.814000 audit[6348]: USER_ACCT pid=6348 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:03:25.827926 kernel: audit: type=1101 audit(1707505405.814:523): pid=6348 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:03:25.828177 sshd[6348]: Accepted publickey for core from 139.178.68.195 port 47706 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg
Feb 9 19:03:25.827000 audit[6348]: CRED_ACQ pid=6348 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:03:25.828745 sshd[6348]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:03:25.840601 kernel: audit: type=1103 audit(1707505405.827:524): pid=6348 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:03:25.842065 kernel: audit: type=1006 audit(1707505405.827:525): pid=6348 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1
Feb 9 19:03:25.827000 audit[6348]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffef22b1240 a2=3 a3=0 items=0 ppid=1 pid=6348 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:03:25.848403 systemd[1]: Started session-28.scope.
Feb 9 19:03:25.850547 kernel: audit: type=1300 audit(1707505405.827:525): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffef22b1240 a2=3 a3=0 items=0 ppid=1 pid=6348 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:03:25.850389 systemd-logind[1698]: New session 28 of user core.
Feb 9 19:03:25.827000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 9 19:03:25.861039 kernel: audit: type=1327 audit(1707505405.827:525): proctitle=737368643A20636F7265205B707269765D
Feb 9 19:03:25.861259 kernel: audit: type=1105 audit(1707505405.859:526): pid=6348 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:03:25.859000 audit[6348]: USER_START pid=6348 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:03:25.864000 audit[6352]: CRED_ACQ pid=6352 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:03:25.874595 kernel: audit: type=1103 audit(1707505405.864:527): pid=6352 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:03:26.066593 sshd[6348]: pam_unix(sshd:session): session closed for user core
Feb 9 19:03:26.068000 audit[6348]: USER_END pid=6348 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:03:26.070686 systemd[1]: sshd@27-172.31.19.7:22-139.178.68.195:47706.service: Deactivated successfully.
Feb 9 19:03:26.071811 systemd[1]: session-28.scope: Deactivated successfully.
Feb 9 19:03:26.068000 audit[6348]: CRED_DISP pid=6348 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:03:26.076023 systemd-logind[1698]: Session 28 logged out. Waiting for processes to exit.
Feb 9 19:03:26.077459 systemd-logind[1698]: Removed session 28.
Feb 9 19:03:26.080608 kernel: audit: type=1106 audit(1707505406.068:528): pid=6348 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:03:26.080678 kernel: audit: type=1104 audit(1707505406.068:529): pid=6348 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 9 19:03:26.068000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-172.31.19.7:22-139.178.68.195:47706 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Feb 9 19:03:27.402088 systemd[1]: run-containerd-runc-k8s.io-d1b341d8aceb126cadae0fc8420c502e061bd827544986cf6e326aa32a1a04a9-runc.E2nN9i.mount: Deactivated successfully. Feb 9 19:03:31.101638 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 19:03:31.101754 kernel: audit: type=1130 audit(1707505411.092:531): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-172.31.19.7:22-139.178.68.195:41390 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:03:31.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-172.31.19.7:22-139.178.68.195:41390 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:03:31.092958 systemd[1]: Started sshd@28-172.31.19.7:22-139.178.68.195:41390.service. Feb 9 19:03:31.252000 audit[6381]: USER_ACCT pid=6381 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:03:31.253262 sshd[6381]: Accepted publickey for core from 139.178.68.195 port 41390 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 19:03:31.258562 kernel: audit: type=1101 audit(1707505411.252:532): pid=6381 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:03:31.269582 kernel: audit: type=1103 audit(1707505411.258:533): pid=6381 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 
terminal=ssh res=success' Feb 9 19:03:31.269701 kernel: audit: type=1006 audit(1707505411.258:534): pid=6381 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1 Feb 9 19:03:31.258000 audit[6381]: CRED_ACQ pid=6381 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:03:31.266167 systemd[1]: Started session-29.scope. Feb 9 19:03:31.259732 sshd[6381]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:03:31.267620 systemd-logind[1698]: New session 29 of user core. Feb 9 19:03:31.258000 audit[6381]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff043fd7c0 a2=3 a3=0 items=0 ppid=1 pid=6381 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:31.283618 kernel: audit: type=1300 audit(1707505411.258:534): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff043fd7c0 a2=3 a3=0 items=0 ppid=1 pid=6381 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:03:31.289716 kernel: audit: type=1327 audit(1707505411.258:534): proctitle=737368643A20636F7265205B707269765D Feb 9 19:03:31.289770 kernel: audit: type=1105 audit(1707505411.274:535): pid=6381 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:03:31.258000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:03:31.274000 audit[6381]: 
USER_START pid=6381 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:03:31.275000 audit[6385]: CRED_ACQ pid=6385 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:03:31.294607 kernel: audit: type=1103 audit(1707505411.275:536): pid=6385 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:03:31.451670 sshd[6381]: pam_unix(sshd:session): session closed for user core Feb 9 19:03:31.453000 audit[6381]: USER_END pid=6381 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:03:31.456045 systemd[1]: sshd@28-172.31.19.7:22-139.178.68.195:41390.service: Deactivated successfully. Feb 9 19:03:31.457141 systemd[1]: session-29.scope: Deactivated successfully. Feb 9 19:03:31.459868 systemd-logind[1698]: Session 29 logged out. Waiting for processes to exit. Feb 9 19:03:31.453000 audit[6381]: CRED_DISP pid=6381 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:03:31.461752 systemd-logind[1698]: Removed session 29. 
Feb 9 19:03:31.465319 kernel: audit: type=1106 audit(1707505411.453:537): pid=6381 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:03:31.465382 kernel: audit: type=1104 audit(1707505411.453:538): pid=6381 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 9 19:03:31.453000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-172.31.19.7:22-139.178.68.195:41390 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:03:45.018037 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ff46160ec7fed4f89ea4f88a6253050b3b53a9753cec5b566b4c2c6c2b80c86-rootfs.mount: Deactivated successfully. 
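The audit PROCTITLE records above (type=1327) carry the process title as a hex string. A minimal sketch of decoding it with plain Python (no audit tooling assumed) shows it is sshd's privilege-separated title for the `core` sessions logged here:

```python
# The PROCTITLE value repeated in the audit records above,
# decoded from hex to ASCII.
title = bytes.fromhex("737368643A20636F7265205B707269765D").decode("ascii")
print(title)  # -> sshd: core [priv]
```

The decoded title matches the sshd process whose USER_START/USER_END records bracket sessions 28 and 29.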
Feb 9 19:03:45.020731 env[1709]: time="2024-02-09T19:03:45.020464396Z" level=info msg="shim disconnected" id=0ff46160ec7fed4f89ea4f88a6253050b3b53a9753cec5b566b4c2c6c2b80c86
Feb 9 19:03:45.022928 env[1709]: time="2024-02-09T19:03:45.020742492Z" level=warning msg="cleaning up after shim disconnected" id=0ff46160ec7fed4f89ea4f88a6253050b3b53a9753cec5b566b4c2c6c2b80c86 namespace=k8s.io
Feb 9 19:03:45.022928 env[1709]: time="2024-02-09T19:03:45.020769643Z" level=info msg="cleaning up dead shim"
Feb 9 19:03:45.033085 env[1709]: time="2024-02-09T19:03:45.033030687Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:03:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6432 runtime=io.containerd.runc.v2\n"
Feb 9 19:03:45.177421 kubelet[2906]: I0209 19:03:45.177345 2906 scope.go:115] "RemoveContainer" containerID="0ff46160ec7fed4f89ea4f88a6253050b3b53a9753cec5b566b4c2c6c2b80c86"
Feb 9 19:03:45.192369 env[1709]: time="2024-02-09T19:03:45.192320534Z" level=info msg="CreateContainer within sandbox \"56eed4929461d7f61a278e45ad04401c6b337ada2d96291d8ffcf73614a82888\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Feb 9 19:03:45.227897 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3633511517.mount: Deactivated successfully.
Feb 9 19:03:45.280211 env[1709]: time="2024-02-09T19:03:45.280080950Z" level=info msg="CreateContainer within sandbox \"56eed4929461d7f61a278e45ad04401c6b337ada2d96291d8ffcf73614a82888\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"26f56b9ed5b9ad5ac86dbf32b53b1714efc0e6ca28ea7d8f8ed95efeb658af25\""
Feb 9 19:03:45.281849 env[1709]: time="2024-02-09T19:03:45.281807838Z" level=info msg="StartContainer for \"26f56b9ed5b9ad5ac86dbf32b53b1714efc0e6ca28ea7d8f8ed95efeb658af25\""
Feb 9 19:03:45.377333 env[1709]: time="2024-02-09T19:03:45.377274033Z" level=info msg="StartContainer for \"26f56b9ed5b9ad5ac86dbf32b53b1714efc0e6ca28ea7d8f8ed95efeb658af25\" returns successfully"
Feb 9 19:03:45.397974 kubelet[2906]: E0209 19:03:45.397483 2906 controller.go:189] failed to update lease, error: Put "https://172.31.19.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-7?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 9 19:03:46.450390 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ccd564d6c54be83aac9b7466291c2a46052112ef463aab9c1ce91877996d64b8-rootfs.mount: Deactivated successfully.
Feb 9 19:03:46.452371 env[1709]: time="2024-02-09T19:03:46.452318172Z" level=info msg="shim disconnected" id=ccd564d6c54be83aac9b7466291c2a46052112ef463aab9c1ce91877996d64b8
Feb 9 19:03:46.452805 env[1709]: time="2024-02-09T19:03:46.452374090Z" level=warning msg="cleaning up after shim disconnected" id=ccd564d6c54be83aac9b7466291c2a46052112ef463aab9c1ce91877996d64b8 namespace=k8s.io
Feb 9 19:03:46.452805 env[1709]: time="2024-02-09T19:03:46.452387737Z" level=info msg="cleaning up dead shim"
Feb 9 19:03:46.461863 env[1709]: time="2024-02-09T19:03:46.461816285Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:03:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6494 runtime=io.containerd.runc.v2\n"
Feb 9 19:03:47.179594 kubelet[2906]: I0209 19:03:47.179565 2906 scope.go:115] "RemoveContainer" containerID="ccd564d6c54be83aac9b7466291c2a46052112ef463aab9c1ce91877996d64b8"
Feb 9 19:03:47.182332 env[1709]: time="2024-02-09T19:03:47.182282762Z" level=info msg="CreateContainer within sandbox \"9131dad990590d883a50564154acbb4156a8f79704c42017b8ee01e55bb1478a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Feb 9 19:03:47.212188 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3588013014.mount: Deactivated successfully.
Feb 9 19:03:47.220215 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2983624163.mount: Deactivated successfully.
Feb 9 19:03:47.224974 env[1709]: time="2024-02-09T19:03:47.224927593Z" level=info msg="CreateContainer within sandbox \"9131dad990590d883a50564154acbb4156a8f79704c42017b8ee01e55bb1478a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"92d9ff43f7b6f293f65110470d6e7647f6123438fbe18530ed352d2118f10fb4\""
Feb 9 19:03:47.225565 env[1709]: time="2024-02-09T19:03:47.225534504Z" level=info msg="StartContainer for \"92d9ff43f7b6f293f65110470d6e7647f6123438fbe18530ed352d2118f10fb4\""
Feb 9 19:03:47.319629 env[1709]: time="2024-02-09T19:03:47.319576173Z" level=info msg="StartContainer for \"92d9ff43f7b6f293f65110470d6e7647f6123438fbe18530ed352d2118f10fb4\" returns successfully"
Feb 9 19:03:50.505012 env[1709]: time="2024-02-09T19:03:50.504963445Z" level=info msg="shim disconnected" id=7bea63a03135bc8b1e83a103c7af836d9352537c178bc5e8c8367fd925b23701
Feb 9 19:03:50.506157 env[1709]: time="2024-02-09T19:03:50.506125559Z" level=warning msg="cleaning up after shim disconnected" id=7bea63a03135bc8b1e83a103c7af836d9352537c178bc5e8c8367fd925b23701 namespace=k8s.io
Feb 9 19:03:50.506290 env[1709]: time="2024-02-09T19:03:50.506272536Z" level=info msg="cleaning up dead shim"
Feb 9 19:03:50.517052 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7bea63a03135bc8b1e83a103c7af836d9352537c178bc5e8c8367fd925b23701-rootfs.mount: Deactivated successfully.
Feb 9 19:03:50.533123 env[1709]: time="2024-02-09T19:03:50.533029661Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:03:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6575 runtime=io.containerd.runc.v2\n"
Feb 9 19:03:51.198137 kubelet[2906]: I0209 19:03:51.198063 2906 scope.go:115] "RemoveContainer" containerID="7bea63a03135bc8b1e83a103c7af836d9352537c178bc5e8c8367fd925b23701"
Feb 9 19:03:51.201575 env[1709]: time="2024-02-09T19:03:51.201531478Z" level=info msg="CreateContainer within sandbox \"a6f846aeb46a59423ee39392b8d000384db2a2a3ad6805934ff3a48448b8979c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Feb 9 19:03:51.233132 env[1709]: time="2024-02-09T19:03:51.233080135Z" level=info msg="CreateContainer within sandbox \"a6f846aeb46a59423ee39392b8d000384db2a2a3ad6805934ff3a48448b8979c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"7f0487716629eb11691f082b2ca3cb7790629bc2f16da34153576879595acfb5\""
Feb 9 19:03:51.234103 env[1709]: time="2024-02-09T19:03:51.234065932Z" level=info msg="StartContainer for \"7f0487716629eb11691f082b2ca3cb7790629bc2f16da34153576879595acfb5\""
Feb 9 19:03:51.371747 env[1709]: time="2024-02-09T19:03:51.371693047Z" level=info msg="StartContainer for \"7f0487716629eb11691f082b2ca3cb7790629bc2f16da34153576879595acfb5\" returns successfully"
Feb 9 19:03:52.163933 systemd[1]: run-containerd-runc-k8s.io-f2a0fede62648f90d07e75b7785bea58e5e8993ca6189ca8ac5316ee84453ec1-runc.I5jWJu.mount: Deactivated successfully.
Feb 9 19:03:52.276435 systemd[1]: run-containerd-runc-k8s.io-60ff1f681498f286036df84ec5fda0a4dd600df7ac6f19052ea66b7afc0512da-runc.DUVYUM.mount: Deactivated successfully.
Feb 9 19:03:55.398391 kubelet[2906]: E0209 19:03:55.398285 2906 controller.go:189] failed to update lease, error: Put "https://172.31.19.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-7?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 9 19:04:05.404091 kubelet[2906]: E0209 19:04:05.403778 2906 controller.go:189] failed to update lease, error: Put "https://172.31.19.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-7?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
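The `audit(1707505405.814:523)` stamps in the kernel audit lines are `epoch_seconds.millis:serial`, and `auid=4294967295` is the unset login-UID sentinel, `(uint32)-1`. A small sketch (plain Python, names chosen here for illustration) that cross-checks a stamp against the journal timestamps:

```python
from datetime import datetime, timezone

def decode_audit_stamp(stamp: str) -> str:
    """Split an 'epoch.millis:serial' audit stamp into UTC time and serial."""
    ts, serial = stamp.split(":")
    when = datetime.fromtimestamp(float(ts), tz=timezone.utc)
    return f"{when:%Y-%m-%d %H:%M:%S}Z serial={serial}"

# The USER_ACCT record above: audit(1707505405.814:523)
print(decode_audit_stamp("1707505405.814:523"))  # -> 2024-02-09 19:03:25Z serial=523

# auid=4294967295 is the 32-bit -1 "unset" value, not a real UID.
print(4294967295 == 2**32 - 1)  # -> True
```

The decoded time agrees with the `Feb 9 19:03:25` journal prefix on the same record, which is a handy consistency check when correlating audit serials with journal entries.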